Bug 1833421 - [RFE] Avoid setting /dev/sdb as the default device if the device doesn't exist, or validate the field.
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: cockpit-ovirt
Classification: oVirt
Component: gluster-ansible
Version: 0.14.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Parth Dhanjal
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-08 16:04 UTC by Sandro Bonazzola
Modified: 2020-09-09 13:20 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-07 07:17:04 UTC
oVirt Team: Gluster
Embargoed:
sbonazzo: planning_ack?
sbonazzo: devel_ack?
sbonazzo: testing_ack?


Attachments
gluster deployment log (2.62 KB, application/gzip)
2020-05-08 16:04 UTC, Sandro Bonazzola
no flags

Description Sandro Bonazzola 2020-05-08 16:04:47 UTC
Created attachment 1686540 [details]
gluster deployment log

Description of problem:
While following the hyperconverged guide for a single host at:
https://ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html

The deployment fails at the last Ansible task:
TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [node0-storage.lab] (item={'vgname': 'gluster_vg_sdb', 'lvname': 'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "  Volume group \"gluster_vg_sdb\" not found\n  Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}

Log attached.
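
For reference, a quick way to confirm on the node what that error points at (a minimal sketch, using the device and volume group names from the task output above):

# Does the block device the wizard defaulted to actually exist?
test -b /dev/sdb && echo "/dev/sdb present" || echo "/dev/sdb missing"
# Was the volume group the failed task expects ever created?
vgs gluster_vg_sdb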

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Install https://resources.ovirt.org/pub/ovirt-4.4-pre/iso/ovirt-node-ng-installer/4.4.0-2020050620/el8/ovirt-node-ng-installer-4.4.0-2020050620.el8.iso on a VM with nested virtualization enabled.
2. Follow: https://ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html

Actual results: deploy fails


Expected results: deploy succeeds


Additional info: I still have the VM up and running; I'll keep it until you can look into it.

Comment 1 Sandro Bonazzola 2020-05-08 16:08:31 UTC
Forgot to mention the version tested: ovirt-node-ng-installer-4.4.0-2020050620.el8.iso
which includes:

# rpm -qa |grep gluster|sort
gluster-ansible-cluster-1.0.0-1.el8.noarch
gluster-ansible-features-1.0.5-6.el8.noarch
gluster-ansible-infra-1.0.4-8.el8.noarch
gluster-ansible-maintenance-1.0.1-2.el8.noarch
gluster-ansible-repositories-1.0.1-2.el8.noarch
gluster-ansible-roles-1.0.5-10.el8.noarch
glusterfs-7.5-1.el8.x86_64
glusterfs-api-7.5-1.el8.x86_64
glusterfs-cli-7.5-1.el8.x86_64
glusterfs-client-xlators-7.5-1.el8.x86_64
glusterfs-events-7.5-1.el8.x86_64
glusterfs-fuse-7.5-1.el8.x86_64
glusterfs-geo-replication-7.5-1.el8.x86_64
glusterfs-libs-7.5-1.el8.x86_64
glusterfs-rdma-7.5-1.el8.x86_64
glusterfs-server-7.5-1.el8.x86_64
libvirt-daemon-driver-storage-gluster-5.6.0-10.el8.x86_64
python3-gluster-7.5-1.el8.x86_64
qemu-kvm-block-gluster-4.1.0-23.el8.1.x86_64
vdsm-gluster-4.40.14-1.el8.x86_64

# rpm -qa |grep cockpit |sort
cockpit-217-1.el8.x86_64
cockpit-bridge-217-1.el8.x86_64
cockpit-dashboard-217-1.el8.noarch
cockpit-ovirt-dashboard-0.14.6-1.el8.noarch
cockpit-storaged-217-1.el8.noarch
cockpit-system-217-1.el8.noarch
cockpit-ws-217-1.el8.x86_64

# rpm -qa |grep vdo |sort
kmod-kvdo-6.2.1.138-58.el8_1.x86_64
libblockdev-vdo-2.19-9.el8.x86_64
vdo-6.2.1.134-11.el8.x86_64

Comment 2 Sandro Bonazzola 2020-05-08 16:09:09 UTC
I also forgot to mention that I enabled the VDO option for the deployment.

Comment 4 Gobinda Das 2020-05-09 04:40:33 UTC
Hi Sandro,
 I looked at the node and found that the device "/dev/sdb", which is used for the deployment, does not exist.

[root@node0 ~]# lsblk
NAME                                                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                                                       11:0    1  1.4G  0 rom  
vda                                                      252:0    0   60G  0 disk 
├─vda1                                                   252:1    0    1G  0 part /boot
└─vda2                                                   252:2    0   59G  0 part 
  ├─onn_node0-pool00_tmeta                               253:0    0    1G  0 lvm  
  │ └─onn_node0-pool00-tpool                             253:2    0 87.2G  0 lvm  
  │   ├─onn_node0-ovirt--node--ng--4.4.0--0.20200506.0+1 253:3    0 50.2G  0 lvm  /
  │   ├─onn_node0-pool00                                 253:5    0 87.2G  0 lvm  
  │   ├─onn_node0-var_log_audit                          253:6    0    2G  0 lvm  /var/log/audit
  │   ├─onn_node0-var_log                                253:7    0    8G  0 lvm  /var/log
  │   ├─onn_node0-var_crash                              253:8    0   10G  0 lvm  /var/crash
  │   ├─onn_node0-var                                    253:9    0   15G  0 lvm  /var
  │   ├─onn_node0-tmp                                    253:10   0    1G  0 lvm  /tmp
  │   └─onn_node0-home                                   253:11   0    1G  0 lvm  /home
  ├─onn_node0-pool00_tdata                               253:1    0 87.2G  0 lvm  
  │ └─onn_node0-pool00-tpool                             253:2    0 87.2G  0 lvm  
  │   ├─onn_node0-ovirt--node--ng--4.4.0--0.20200506.0+1 253:3    0 50.2G  0 lvm  /
  │   ├─onn_node0-pool00                                 253:5    0 87.2G  0 lvm  
  │   ├─onn_node0-var_log_audit                          253:6    0    2G  0 lvm  /var/log/audit
  │   ├─onn_node0-var_log                                253:7    0    8G  0 lvm  /var/log
  │   ├─onn_node0-var_crash                              253:8    0   10G  0 lvm  /var/crash
  │   ├─onn_node0-var                                    253:9    0   15G  0 lvm  /var
  │   ├─onn_node0-tmp                                    253:10   0    1G  0 lvm  /tmp
  │   └─onn_node0-home                                   253:11   0    1G  0 lvm  /home
  └─onn_node0-swap                                       253:4    0  7.9G  0 lvm  [SWAP]
vdb                                                      252:16   0   60G  0 disk 
└─vdb1                                                   252:17   0   60G  0 part 
  └─onn_node0-pool00_tdata                               253:1    0 87.2G  0 lvm  
    └─onn_node0-pool00-tpool                             253:2    0 87.2G  0 lvm  
      ├─onn_node0-ovirt--node--ng--4.4.0--0.20200506.0+1 253:3    0 50.2G  0 lvm  /
      ├─onn_node0-pool00                                 253:5    0 87.2G  0 lvm  
      ├─onn_node0-var_log_audit                          253:6    0    2G  0 lvm  /var/log/audit
      ├─onn_node0-var_log                                253:7    0    8G  0 lvm  /var/log
      ├─onn_node0-var_crash                              253:8    0   10G  0 lvm  /var/crash
      ├─onn_node0-var                                    253:9    0   15G  0 lvm  /var
      ├─onn_node0-tmp                                    253:10   0    1G  0 lvm  /tmp
      └─onn_node0-home                                   253:11   0    1G  0 lvm  /home


In the inventory you have:
gluster_infra_vdo:
        - name: vdo_sdb
          device: /dev/sdb -> This does not exist on the node.
          slabsize: 32G
          logicalsize: 11000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M

Please provide a correct, unused device and try again. Let me know if you hit any issues after that.
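
As a rough guide (a sketch, not part of the deployment tooling; /dev/vdX below is just a placeholder for whatever device you pick), a candidate can be checked before putting it into the inventory:

DEV=/dev/vdX                  # placeholder: the device you intend to use
test -b "$DEV" || echo "$DEV is not a block device"
lsblk -n "$DEV"               # should list only the disk itself, no partitions or LVM children
blkid -p "$DEV" || echo "no existing filesystem/LVM signature on $DEV"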

Comment 5 Sandro Bonazzola 2020-05-10 22:01:31 UTC
(In reply to Gobinda Das from comment #4)
> Hi Sandro,
>  I looked the node and found that the device "/dev/sdb" which is used for
> deployment does not exist.

I don't remember being asked about the name of the devices. The host is a virtual host; its disks are /dev/vda and /dev/vdb.
I'll try the deployment flow again, paying more attention to the questions in the wizard.
It would probably help to add a validation step on device existence if the question is already there.

Comment 6 Sandro Bonazzola 2020-05-11 16:35:17 UTC
I found the point where sdb was mentioned, and indeed you're right: it was set to sdb on the previous run as well.
Changing this to an RFE for detecting whether /dev/sdb exists before setting it as the default in the wizard.
It is better to have nothing here if /dev/sdb does not exist, and fail UI validation, than to fail at the end of the last step 15 minutes later.
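
What that wizard-side check could roughly look like (a sketch, assuming the UI can run a helper command on the host; a real implementation would also have to exclude disks whose partitions or LVM PVs are already in use):

# List whole disks with no mountpoint; offer a default device only if this
# list is non-empty, and reject any user-entered device that is not in it.
lsblk -d -n -o NAME,TYPE,MOUNTPOINT | awk '$2 == "disk" && $3 == "" {print "/dev/" $1}'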

Comment 7 Gobinda Das 2020-09-07 07:17:04 UTC
An in-flight check at that stage is not possible, because no Ansible inventory file has been generated yet. Also, even if a user mistakenly provides a wrong disk, the deployment will fail at the very first stage with a meaningful message. So I don't think this is a high-severity bug.
Closing this for now; will reopen if more customers ask for this.

