Bug 1571580 - Add a preflight check for the presence of VDO volume on the disks
Summary: Add a preflight check for the presence of VDO volume on the disks
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhi-1.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1571586
Blocks:
 
Reported: 2018-04-25 07:09 UTC by SATHEESARAN
Modified: 2019-03-04 06:39 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1571586 (view as bug list)
Environment:
Last Closed: 2019-02-18 05:53:22 UTC
Embargoed:



Description SATHEESARAN 2018-04-25 07:09:33 UTC
Description of problem:
-----------------------
When I attempted a reinstallation, the disks already carried VDO signatures, and the installation failed with the reason 'UUID already exists'.

It would be helpful to customers to run a preflight check on the disks and inform the user about any existing VDO signature before deployment proceeds.
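For illustration only, such a check could be based on blkid, which already reports TYPE="vdo" for these disks (a minimal sketch; the device path is just an example):

# Hypothetical preflight sketch: refuse to proceed if the disk already
# carries a VDO signature. /dev/sdb is only an example device.
disk=/dev/sdb
if [ "$(blkid -o value -s TYPE "$disk" 2>/dev/null)" = "vdo" ]; then
    echo "ERROR: $disk already has a VDO signature (UUID $(blkid -o value -s UUID "$disk"))" >&2
    exit 1
fi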


Version-Release number of selected component (if applicable):
-------------------------------------------------------------
cockpit-ovirt-dashboard-0.11.22

How reproducible:
-------------------
Always

Steps to Reproduce:
-------------------
1. Create a VDO volume ( on /dev/sdb )
2. Reinstall the host; the boot disk ( /dev/sda ) is reformatted and installed with a fresh RHVH
3. Try to create a VDO volume again on /dev/sdb (a rough manual equivalent is sketched below)
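Roughly, step 3 boils down to the VDO manager CLI call that gdeploy drives under the hood (name and logical size here mirror the failing task shown in comment 1; treat this as an approximation of the deployment flow, not the exact command it runs):

# Approximate manual equivalent of step 3; values taken from comment 1.
vdo create --name=vdo_sdb --device=/dev/sdb --vdoLogicalSize=164840G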

Actual results:
---------------
As a VDO volume already exists on /dev/sdb, creation of the new VDO volume fails

Expected results:
------------------
Perform a preflight check and fail the deployment up front with a clear reason
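As a sketch of the expected behaviour (the disk list and the wording of the message are assumptions, not the actual cockpit/gdeploy interface), the check could run over every disk selected for deployment and abort before any VDO step is attempted:

# Sketch only: iterate over the disks chosen for deployment and fail fast.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    if [ "$(blkid -o value -s TYPE "$disk" 2>/dev/null)" = "vdo" ]; then
        echo "Deployment aborted: $disk carries an existing VDO signature." >&2
        echo "Wipe the device or pick another disk, then retry." >&2
        exit 1
    fi
done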

Comment 1 SATHEESARAN 2018-04-25 07:12:27 UTC

[root@ ~]# lsblk
NAME                                                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                 8:0    0   931G  0 disk 
├─sda1                                                              8:1    0     1G  0 part /boot
└─sda2                                                              8:2    0   930G  0 part 
  ├─rhvh_rhsqa--grafton11--nic2-pool00_tmeta                      253:0    0     1G  0 lvm  
  │ └─rhvh_rhsqa--grafton11--nic2-pool00-tpool                    253:2    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-rhvh--4.2.2.1--0.20180420.0+1 253:3    0 798.2G  0 lvm  /
  │   ├─rhvh_rhsqa--grafton11--nic2-pool00                        253:5    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log_audit                 253:6    0     2G  0 lvm  /var/log/audit
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log                       253:7    0     8G  0 lvm  /var/log
  │   ├─rhvh_rhsqa--grafton11--nic2-var                           253:8    0    15G  0 lvm  /var
  │   ├─rhvh_rhsqa--grafton11--nic2-tmp                           253:9    0     1G  0 lvm  /tmp
  │   ├─rhvh_rhsqa--grafton11--nic2-home                          253:10   0     1G  0 lvm  /home
  │   ├─rhvh_rhsqa--grafton11--nic2-root                          253:11   0 798.2G  0 lvm  
  │   └─rhvh_rhsqa--grafton11--nic2-var_crash                     253:12   0    10G  0 lvm  /var/crash
  ├─rhvh_rhsqa--grafton11--nic2-pool00_tdata                      253:1    0 825.2G  0 lvm  
  │ └─rhvh_rhsqa--grafton11--nic2-pool00-tpool                    253:2    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-rhvh--4.2.2.1--0.20180420.0+1 253:3    0 798.2G  0 lvm  /
  │   ├─rhvh_rhsqa--grafton11--nic2-pool00                        253:5    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log_audit                 253:6    0     2G  0 lvm  /var/log/audit
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log                       253:7    0     8G  0 lvm  /var/log
  │   ├─rhvh_rhsqa--grafton11--nic2-var                           253:8    0    15G  0 lvm  /var
  │   ├─rhvh_rhsqa--grafton11--nic2-tmp                           253:9    0     1G  0 lvm  /tmp
  │   ├─rhvh_rhsqa--grafton11--nic2-home                          253:10   0     1G  0 lvm  /home
  │   ├─rhvh_rhsqa--grafton11--nic2-root                          253:11   0 798.2G  0 lvm  
  │   └─rhvh_rhsqa--grafton11--nic2-var_crash                     253:12   0    10G  0 lvm  /var/crash
  └─rhvh_rhsqa--grafton11--nic2-swap                              253:4    0     4G  0 lvm  [SWAP]
sdb                                                                 8:16   0  18.2T  0 disk 
sdc                                                                 8:32   0 223.1G  0 disk 
sdd                                                                 8:48   0   931G  0 disk 
sde                                                                 8:64   0   1.8T  0 disk 
sdf                                                                 8:80   0   1.8T  0 disk 


[root@ ~]# blkid /dev/sdb
/dev/sdb: UUID="05e20c8a-6128-4f2c-aa8c-96b0512b0d2e" TYPE="vdo" 


The 'lsblk' command does not list any VDO volume on /dev/sdb, but there is a VDO signature on this device.

When gdeploy is run to create a VDO volume on this device, the following error is observed:

<snip>

TASK [Create VDO with specified size] ******************************************
failed: [10.70.45.33] (item={u'disk': u'/dev/sdb', u'logicalsize': u'164840G', u'name': u'vdo_sdb'}) => {"changed": false, "err": "vdo: ERROR -   Couldn't find device with uuid ZhRR6r-yR0L-OfGr-9LzK-jEC3-Ww1j-HHioKb.\n", "item": {"disk": "/dev/sdb", "logicalsize": "164840G", "name": "vdo_sdb"}, "msg": "Creating VDO vdo_sdb failed.", "rc": 2}
	to retry, use: --limit @/tmp/tmpY6Co3_/vdo-create.retry

PLAY RECAP *********************************************************************
10.70.45.33                : ok=0    changed=0    unreachable=0    failed=1   
</snip>
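For completeness, wiping the leftover signature before retrying unblocks the run in this situation. This is an assumption about the workaround rather than something the deployment tooling does, and it destroys any signatures/data on the device:

# Clear the stale VDO signature left over from the previous installation.
# WARNING: destructive to anything on the device.
wipefs -a /dev/sdb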

Comment 2 Sahina Bose 2019-02-18 05:53:22 UTC
Closing as per the dependent BZ status. This is currently handled by the Ansible vdo module.
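For reference, a rough ad-hoc invocation of that module (the module name and namespace depend on the Ansible version; in newer releases it ships as community.general.vdo, and the host, name and size below are simply the values from this report):

# Ad-hoc sketch of the Ansible vdo module managing the volume; parameters
# beyond name/device/logicalsize/state are left at their defaults.
ansible all -i '10.70.45.33,' -m vdo -a "name=vdo_sdb device=/dev/sdb logicalsize=164840G state=present"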

