Bug 1820728 - [RHEL-8.2] Re-running the gdeploy conf file for backend setup fails
Summary: [RHEL-8.2] Re-running the gdeploy conf file for backend setup fails
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 2
Assignee: Prajith
QA Contact: Mugdha Soni
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-03 17:18 UTC by Mugdha Soni
Modified: 2020-06-16 05:56 UTC (History)
CC List: 6 users

Fixed In Version: gdeploy-3.0.0-6.el8rhgs
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-16 05:56:04 UTC
Embargoed:


Attachments


Links:
Red Hat Product Errata RHEA-2020:2577 (last updated 2020-06-16 05:56:20 UTC)

Description Mugdha Soni 2020-04-03 17:18:41 UTC
Description of problem:
------------------------------------------------------------
When the backend setup conf file is re-run after cleaning up the bricks, VGs and PVs and removing the entries from /etc/fstab, gdeploy is unable to label the bricks.

failed: [10.70.47.31] (item={'device': '/dev/vg1/lv1', 'path': '/redhat/brick1'}) => {"ansible_facts": {"discovered_interpreter_python": "/usr/libexec/platform-python"}, "ansible_loop_var": "item", "changed": true, "cmd": "semanage fcontext -a -t glusterd_brick_t /redhat/brick1", "delta": "0:00:00.754554", "end": "2020-04-03 22:02:21.157970", "item": {"device": "/dev/vg1/lv1", "path": "/redhat/brick1"}, "msg": "non-zero return code", "rc": 1, "start": "2020-04-03 22:02:20.403416", "stderr": "ValueError: File context for /redhat/brick1 already defined", "stderr_lines": ["ValueError: File context for /redhat/brick1 already defined"], "stdout": "", "stdout_lines": []}

failed: [10.70.47.31] (item={'device': '/dev/vg2/lv2', 'path': '/redhat/brick2'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "semanage fcontext -a -t glusterd_brick_t /redhat/brick2", "delta": "0:00:00.770436", "end": "2020-04-03 22:02:22.463349", "item": {"device": "/dev/vg2/lv2", "path": "/redhat/brick2"}, "msg": "non-zero return code", "rc": 1, "start": "2020-04-03 22:02:21.692913", "stderr": "ValueError: File context for /redhat/brick2 already defined", "stderr_lines": ["ValueError: File context for /redhat/brick2 already defined"], "stdout": "", "stdout_lines": []}
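
For reference, the underlying failure can be reproduced outside of gdeploy by adding the same fcontext rule twice (a minimal sketch; the path is one of the brick mountpoints from the conf file below):

# first run adds the rule and succeeds
semanage fcontext -a -t glusterd_brick_t /redhat/brick1
# running the same command again fails because the rule already exists
semanage fcontext -a -t glusterd_brick_t /redhat/brick1
ValueError: File context for /redhat/brick1 already defined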

Version-Release number of selected component:
------------------------------------------------------------
gdeploy-3.0.0-5.el8rhgs.noarch
glusterfs-server-6.0-31.el8rhgs.x86_64
glusterfs-fuse-6.0-31.el8rhgs.x86_64

How reproducible:
------------------------------------------------------------
Always

Steps to Reproduce:
------------------------------------------------------------
1. Ran the gdeploy conf file to create the backend setup. It was successful.
2. Unmounted the bricks and removed the VGs, PVs and fstab entries (cleanup sketched below).
3. Ran the same gdeploy conf file again to set up the backend. gdeploy is unable to label the bricks.
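
A condensed sketch of the manual cleanup performed in step 2 (the full transcript is in comment 3; device, VG and brick names match the conf file below):

umount /redhat/brick*            # unmount the bricks
yes | vgremove vg{1..3}          # remove the thin pools, LVs and VGs
pvremove /dev/sd{b..d}           # wipe the PV labels
vi /etc/fstab                    # delete the brick mount entries
rm -rf /redhat/brick{1..3}       # remove the brick directories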

##Conf file used :-

[hosts]
10.70.47.62
10.70.47.31
10.70.46.85
 
[backend-setup]
devices=/dev/sdb,/dev/sdc,/dev/sdd
vgs=vg1,vg2,vg3
pools=gfs_pool1,gfs_pool2,gfs_pool3
lvs=lv1,lv2,lv3
mountpoints=/redhat/brick1,/redhat/brick2,/redhat/brick3


Actual results:
-------------------------------------------------------------
The gdeploy run fails with the following errors:

failed: [10.70.47.31] (item={'device': '/dev/vg1/lv1', 'path': '/redhat/brick1'}) => {"ansible_facts": {"discovered_interpreter_python": "/usr/libexec/platform-python"}, "ansible_loop_var": "item", "changed": true, "cmd": "semanage fcontext -a -t glusterd_brick_t /redhat/brick1", "delta": "0:00:00.754554", "end": "2020-04-03 22:02:21.157970", "item": {"device": "/dev/vg1/lv1", "path": "/redhat/brick1"}, "msg": "non-zero return code", "rc": 1, "start": "2020-04-03 22:02:20.403416", "stderr": "ValueError: File context for /redhat/brick1 already defined", "stderr_lines": ["ValueError: File context for /redhat/brick1 already defined"], "stdout": "", "stdout_lines": []}

failed: [10.70.47.31] (item={'device': '/dev/vg2/lv2', 'path': '/redhat/brick2'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "semanage fcontext -a -t glusterd_brick_t /redhat/brick2", "delta": "0:00:00.770436", "end": "2020-04-03 22:02:22.463349", "item": {"device": "/dev/vg2/lv2", "path": "/redhat/brick2"}, "msg": "non-zero return code", "rc": 1, "start": "2020-04-03 22:02:21.692913", "stderr": "ValueError: File context for /redhat/brick2 already defined", "stderr_lines": ["ValueError: File context for /redhat/brick2 already defined"], "stdout": "", "stdout_lines": []}


Expected results:
---------------------------------------------------------------
The backend setup should succeed when the same conf file is re-run.

Comment 2 Gobinda Das 2020-04-06 05:26:55 UTC
Mugdha,
 How are you doing the cleanup? Are you using a script or doing it manually? Looking at the error, I am wondering whether the cleanup was done properly.
Please share the "lsblk" and "df -Th" output after cleanup.

Prajith, can you please check?

Comment 3 Mugdha Soni 2020-04-06 06:43:58 UTC
The cleanup steps are mentioned in the attachment in comment #1.

The steps were performed manually and are listed below:
1. Unmounted the bricks
2. Removed the VGs
3. Removed the PVs
4. Removed all the brick entries from /etc/fstab
5. Removed the brick directories
----------------------------------------------------------------------------------------------------------

[root@dhcp47-62 /]# umount /redhat/brick*
[root@dhcp47-62 /]# lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0   16G  0 disk 
├─sda1                     8:1    0    1G  0 part /boot
└─sda2                     8:2    0   15G  0 part 
  ├─rhel_dhcp47--39-root 253:0    0 13.4G  0 lvm  /
  └─rhel_dhcp47--39-swap 253:1    0  1.6G  0 lvm  [SWAP]
sdb                        8:16   0   10G  0 disk 
├─vg1-gfs_pool1_tmeta    253:2    0   52M  0 lvm  
│ └─vg1-gfs_pool1-tpool  253:8    0   10G  0 lvm  
│   ├─vg1-gfs_pool1      253:11   0   10G  1 lvm  
│   └─vg1-lv1            253:12   0   10G  0 lvm  
└─vg1-gfs_pool1_tdata    253:5    0   10G  0 lvm  
  └─vg1-gfs_pool1-tpool  253:8    0   10G  0 lvm  
    ├─vg1-gfs_pool1      253:11   0   10G  1 lvm  
    └─vg1-lv1            253:12   0   10G  0 lvm  
sdc                        8:32   0   10G  0 disk 
├─vg2-gfs_pool2_tmeta    253:3    0   52M  0 lvm  
│ └─vg2-gfs_pool2-tpool  253:9    0   10G  0 lvm  
│   ├─vg2-gfs_pool2      253:13   0   10G  1 lvm  
│   └─vg2-lv2            253:14   0   10G  0 lvm  
└─vg2-gfs_pool2_tdata    253:6    0   10G  0 lvm  
  └─vg2-gfs_pool2-tpool  253:9    0   10G  0 lvm  
    ├─vg2-gfs_pool2      253:13   0   10G  1 lvm  
    └─vg2-lv2            253:14   0   10G  0 lvm  
sdd                        8:48   0   20G  0 disk 
├─vg3-gfs_pool3_tmeta    253:4    0  104M  0 lvm  
│ └─vg3-gfs_pool3-tpool  253:10   0 19.9G  0 lvm  
│   ├─vg3-gfs_pool3      253:15   0 19.9G  1 lvm  
│   └─vg3-lv3            253:16   0 19.9G  0 lvm  
└─vg3-gfs_pool3_tdata    253:7    0 19.9G  0 lvm  
  └─vg3-gfs_pool3-tpool  253:10   0 19.9G  0 lvm  
    ├─vg3-gfs_pool3      253:15   0 19.9G  1 lvm  
    └─vg3-lv3            253:16   0 19.9G  0 lvm  
sde                        8:64   0   20G  0 disk 
sr0                       11:0    1 1024M  0 rom  
[root@dhcp47-62 /]# yes | vgremove vg{1..3}
Do you really want to remove volume group "vg1" containing 2 logical volumes? [y/n]: Removing pool "gfs_pool1" will remove 1 dependent volume(s). Proceed? [y/n]: Do you really want to remove active logical volume vg1/lv1? [y/n]:   Logical volume "lv1" successfully removed
Do you really want to remove active logical volume vg1/gfs_pool1? [y/n]:   Logical volume "gfs_pool1" successfully removed
  Volume group "vg1" successfully removed
Do you really want to remove volume group "vg2" containing 2 logical volumes? [y/n]: Removing pool "gfs_pool2" will remove 1 dependent volume(s). Proceed? [y/n]: Do you really want to remove active logical volume vg2/lv2? [y/n]:   Logical volume "lv2" successfully removed
Do you really want to remove active logical volume vg2/gfs_pool2? [y/n]:   Logical volume "gfs_pool2" successfully removed
  Volume group "vg2" successfully removed
Do you really want to remove volume group "vg3" containing 2 logical volumes? [y/n]: Removing pool "gfs_pool3" will remove 1 dependent volume(s). Proceed? [y/n]: Do you really want to remove active logical volume vg3/lv3? [y/n]:   Logical volume "lv3" successfully removed
Do you really want to remove active logical volume vg3/gfs_pool3? [y/n]:   Logical volume "gfs_pool3" successfully removed
  Volume group "vg3" successfully removed
[root@dhcp47-62 /]# pvremove /dev/sd{b..d}
  Labels on physical volume "/dev/sdb" successfully wiped.
  Labels on physical volume "/dev/sdc" successfully wiped.
  Labels on physical volume "/dev/sdd" successfully wiped.
[root@dhcp47-62 /]# vi /etc/fstab
[root@dhcp47-62 /]# cd
[root@dhcp47-62 ~]# cd /redhat
[root@dhcp47-62 redhat]# ls
brick1  brick2  brick3
[root@dhcp47-62 redhat]# rm -rf brick1
[root@dhcp47-62 redhat]# rm -rf brick2
[root@dhcp47-62 redhat]# rm -rf brick3

Comment 5 Prajith 2020-04-13 09:04:03 UTC
From RHEL 8 onwards, semanage does not overwrite an existing label when we try to relabel already-labelled mountpoints. The workaround of erasing the existing labels on the mountpoints before relabeling them has been implemented and merged: https://github.com/gluster/gdeploy/pull/551
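
In practice the fix amounts to deleting any existing rule for the mountpoint before adding it again, roughly as follows (a sketch of the approach, not the exact playbook tasks; the brick path is illustrative):

semanage fcontext -d /redhat/brick1 || true                # drop an existing rule, if any
semanage fcontext -a -t glusterd_brick_t /redhat/brick1    # add the glusterd_brick_t rule
restorecon -Rv /redhat/brick1                              # reapply the context on disk

This corresponds to the "Deleting existing SELinux label if any" task visible in the verification run below.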

Comment 6 Prajith 2020-04-14 01:56:23 UTC
Hi,
The packages can be downloaded from here:

Task Info: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=27905407
Build Info: https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1165398

Comment 9 Mugdha Soni 2020-04-16 06:58:25 UTC
## Tested with the following:-

1.gdeploy-3.0.0-6.el8rhgs.noarch
2.ansible-2.9.6-1.el8ae.noarch
3.glusterfs-server-6.0-32.el8rhgs.x86_64

-----------------------------------------------------------------------------------------------------------
## Conf file used for backend setup :-

[hosts]
10.70.47.62
10.70.47.31
10.70.46.85
 
[backend-setup]
devices=/dev/sdb,/dev/sdc,/dev/sdd
vgs=vg1,vg2,vg3
pools=gfs_pool1,gfs_pool2,gfs_pool3
lvs=lv1,lv2,lv3
mountpoints=/mount/brick1,/mount/brick2,/mount/brick3

------------------------------------------------------------------------------------------------------------

## Steps performed for verification :-

1. Created a backend setup using the conf file mentioned above.
2. Then tested the following scenarios:

       (a) Manually unmounted the bricks, removed the VGs, PVs, fstab entries and the brick directories, then re-ran the conf file.
       (b) Manually unmounted the bricks, removed the VGs, PVs and fstab entries, then re-ran the same conf file.
--------------------------------------------------------------------------------------------------------------

[root@dhcp47-62 gdeploy]# gdeploy -c backend.conf 

PLAY [gluster_servers] **********************************************************************************

TASK [Clean up filesystem signature] ********************************************************************
skipping: [10.70.47.62] => (item=/dev/sdb) 
skipping: [10.70.47.62] => (item=/dev/sdc) 
skipping: [10.70.47.62] => (item=/dev/sdd) 
skipping: [10.70.47.31] => (item=/dev/sdb) 
skipping: [10.70.47.31] => (item=/dev/sdc) 
skipping: [10.70.47.31] => (item=/dev/sdd) 
skipping: [10.70.46.85] => (item=/dev/sdb) 
skipping: [10.70.46.85] => (item=/dev/sdc) 
skipping: [10.70.46.85] => (item=/dev/sdd) 

TASK [Create Physical Volume] ***************************************************************************
changed: [10.70.47.62] => (item=/dev/sdb)
changed: [10.70.47.31] => (item=/dev/sdb)
changed: [10.70.46.85] => (item=/dev/sdb)
changed: [10.70.47.62] => (item=/dev/sdc)
changed: [10.70.46.85] => (item=/dev/sdc)
changed: [10.70.47.31] => (item=/dev/sdc)
changed: [10.70.47.62] => (item=/dev/sdd)
changed: [10.70.46.85] => (item=/dev/sdd)
changed: [10.70.47.31] => (item=/dev/sdd)

PLAY RECAP **********************************************************************************************
10.70.46.85                : ok=1    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
10.70.47.31                : ok=1    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
10.70.47.62                : ok=1    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   


PLAY [gluster_servers] **********************************************************************************

TASK [Create volume group on the disks] *****************************************************************
changed: [10.70.47.62] => (item={'brick': '/dev/sdb', 'vg': 'vg1'})
changed: [10.70.46.85] => (item={'brick': '/dev/sdb', 'vg': 'vg1'})
changed: [10.70.47.31] => (item={'brick': '/dev/sdb', 'vg': 'vg1'})
changed: [10.70.46.85] => (item={'brick': '/dev/sdc', 'vg': 'vg2'})
changed: [10.70.47.31] => (item={'brick': '/dev/sdc', 'vg': 'vg2'})
changed: [10.70.47.62] => (item={'brick': '/dev/sdc', 'vg': 'vg2'})
changed: [10.70.47.62] => (item={'brick': '/dev/sdd', 'vg': 'vg3'})
[WARNING]: The value 0 (type int) in a string field was converted to '0' (type string). If this does not
look like what you expect, quote the entire value to ensure it does not change.
[WARNING]: The value 256 (type int) in a string field was converted to '256' (type string). If this does
not look like what you expect, quote the entire value to ensure it does not change.
changed: [10.70.47.31] => (item={'brick': '/dev/sdd', 'vg': 'vg3'})
changed: [10.70.46.85] => (item={'brick': '/dev/sdd', 'vg': 'vg3'})

PLAY RECAP **********************************************************************************************
10.70.46.85                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.31                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.62                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   


PLAY [gluster_servers] **********************************************************************************

TASK [Create logical volume named metadata] *************************************************************
changed: [10.70.47.62] => (item=vg1)
changed: [10.70.47.31] => (item=vg1)
changed: [10.70.46.85] => (item=vg1)
changed: [10.70.47.31] => (item=vg2)
changed: [10.70.47.62] => (item=vg2)
changed: [10.70.46.85] => (item=vg2)
changed: [10.70.46.85] => (item=vg3)
[WARNING]: The value 0 (type int) in a string field was converted to '0' (type string). If this does not
look like what you expect, quote the entire value to ensure it does not change.
changed: [10.70.47.31] => (item=vg3)
changed: [10.70.47.62] => (item=vg3)

TASK [create data LV that has a size which is a multiple of stripe width] *******************************
changed: [10.70.46.85] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.62] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.31] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.46.85] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.31] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.62] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.46.85] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.47.31] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.47.62] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})

TASK [Convert the logical volume] ***********************************************************************
changed: [10.70.46.85] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.31] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.62] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.46.85] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.31] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.62] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.31] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.46.85] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.47.62] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})

TASK [create stripe-aligned thin volume] ****************************************************************
changed: [10.70.47.62] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.46.85] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.31] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.46.85] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.62] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.31] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.46.85] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.47.31] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.47.62] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})

TASK [Change the attributes of the logical volume] ******************************************************
changed: [10.70.46.85] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.31] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.62] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.46.85] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.31] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.62] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.62] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.47.31] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})
changed: [10.70.46.85] => (item={'lv': 'lv3', 'pool': 'gfs_pool3', 'vg': 'vg3'})

PLAY RECAP **********************************************************************************************
10.70.46.85                : ok=5    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.31                : ok=5    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.62                : ok=5    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   


PLAY [gluster_servers] **********************************************************************************

TASK [Create an xfs filesystem] *************************************************************************
changed: [10.70.47.62] => (item=/dev/vg1/lv1)
changed: [10.70.46.85] => (item=/dev/vg1/lv1)
changed: [10.70.47.31] => (item=/dev/vg1/lv1)
changed: [10.70.46.85] => (item=/dev/vg2/lv2)
changed: [10.70.47.31] => (item=/dev/vg2/lv2)
changed: [10.70.47.62] => (item=/dev/vg2/lv2)
changed: [10.70.47.62] => (item=/dev/vg3/lv3)
changed: [10.70.46.85] => (item=/dev/vg3/lv3)
changed: [10.70.47.31] => (item=/dev/vg3/lv3)

PLAY RECAP **********************************************************************************************
10.70.46.85                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.31                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.62                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   


PLAY [gluster_servers] **********************************************************************************

TASK [Create the mountpoints, skips if present] *********************************************************
ok: [10.70.47.62] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
ok: [10.70.47.31] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
ok: [10.70.46.85] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
ok: [10.70.47.62] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
ok: [10.70.46.85] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
ok: [10.70.47.31] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
ok: [10.70.47.62] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
ok: [10.70.46.85] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
ok: [10.70.47.31] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})

TASK [Set mount options for VDO] ************************************************************************
skipping: [10.70.47.62]
skipping: [10.70.47.31]
skipping: [10.70.46.85]

TASK [Mount the vdo disks (if any)] *********************************************************************
skipping: [10.70.47.31] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'}) 
skipping: [10.70.47.31] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'}) 
skipping: [10.70.47.31] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'}) 
skipping: [10.70.47.62] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'}) 
skipping: [10.70.47.62] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'}) 
skipping: [10.70.47.62] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'}) 
skipping: [10.70.46.85] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'}) 
skipping: [10.70.46.85] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'}) 
skipping: [10.70.46.85] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'}) 

TASK [Mount the disks (non-vdo)] ************************************************************************
changed: [10.70.47.62] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.46.85] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.31] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.62] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.46.85] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.31] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.62] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.46.85] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.47.31] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})

PLAY RECAP **********************************************************************************************
10.70.46.85                : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
10.70.47.31                : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
10.70.47.62                : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   


PLAY [gluster_servers] **********************************************************************************

TASK [Deleting existing SELinux label if any] ***********************************************************
changed: [10.70.46.85] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.62] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.31] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.46.85] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.31] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.62] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.46.85] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.47.62] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.47.31] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})

TASK [Set SELinux labels on the bricks] *****************************************************************
changed: [10.70.46.85] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.31] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.62] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.46.85] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.31] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.62] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.31] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.46.85] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.47.62] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})

TASK [Restore the SELinux context] **********************************************************************
changed: [10.70.46.85] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.62] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.47.31] => (item={'device': '/dev/vg1/lv1', 'path': '/mount/brick1'})
changed: [10.70.46.85] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.31] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.47.62] => (item={'device': '/dev/vg2/lv2', 'path': '/mount/brick2'})
changed: [10.70.46.85] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.47.31] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})
changed: [10.70.47.62] => (item={'device': '/dev/vg3/lv3', 'path': '/mount/brick3'})

PLAY RECAP **********************************************************************************************
10.70.46.85                : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.31                : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.62                : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Error: No sections found in config file. Exiting!



In both cases the gdeploy conf file ran correctly and created the backend setup. The run now includes a task that deletes any existing SELinux labels before relabeling.
Hence, moving the bug to verified state.

Comment 11 errata-xmlrpc 2020-06-16 05:56:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2577

