Bug 1571586 - Add a preflight check for the presence of VDO volume on the disks
Summary: Add a preflight check for the presence of VDO volume on the disks
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1571580
 
Reported: 2018-04-25 07:31 UTC by SATHEESARAN
Modified: 2018-10-22 07:11 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1571580
Environment:
Last Closed: 2018-10-22 07:11:15 UTC
Target Upstream Version:



Description SATHEESARAN 2018-04-25 07:31:46 UTC
+++ This bug was initially created as a clone of Bug #1571580 +++

Description of problem:
-----------------------
When I attempted a reinstallation, the disks already carried VDO signatures from the previous deployment, and the installation failed with the reason 'UUID already exists'.

It would be helpful to customers to check the disks ahead of deployment and inform the user about the problem.


Version-Release number of selected component (if applicable):
-------------------------------------------------------------
cockpit-ovirt-dashboard-0.11.22

How reproducible:
-------------------
Always

Steps to Reproduce:
-------------------
1. Create a VDO volume ( on /dev/sdb )
2. Reinstall the host; the boot disk ( /dev/sda ) is reformatted and a fresh RHVH is installed
3. Try to create a VDO volume again on /dev/sdb (a rough manual equivalent is sketched below)
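
For reference, a rough manual equivalent of step 3 is sketched below; the actual run goes through gdeploy, and the name, device and logical size are only taken from the gdeploy error output quoted later in this report.

# Hedged sketch only, not the exact gdeploy invocation:
vdo create --name=vdo_sdb --device=/dev/sdb --vdoLogicalSize=164840G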

Actual results:
---------------
Because a VDO volume already exists on /dev/sdb, creating a new VDO volume fails.

Expected results:
------------------
Perform a preflight check on the disks and fail the deployment up front with a clear reason.
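
A minimal sketch of such a check, assuming the target disks are known from the gdeploy configuration ( /dev/sdb is only an example here ):

# Hedged sketch: refuse to deploy when a target disk already carries a VDO signature.
for dev in /dev/sdb; do
    if [ "$(blkid -o value -s TYPE "$dev")" = "vdo" ]; then
        echo "ERROR: $dev already has a VDO signature; wipe it first (e.g. wipefs -a $dev)" >&2
        exit 1
    fi
done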

--- Additional comment from SATHEESARAN on 2018-04-25 03:12:27 EDT ---

[root@ ~]# lsblk
NAME                                                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                 8:0    0   931G  0 disk 
├─sda1                                                              8:1    0     1G  0 part /boot
└─sda2                                                              8:2    0   930G  0 part 
  ├─rhvh_rhsqa--grafton11--nic2-pool00_tmeta                      253:0    0     1G  0 lvm  
  │ └─rhvh_rhsqa--grafton11--nic2-pool00-tpool                    253:2    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-rhvh--4.2.2.1--0.20180420.0+1 253:3    0 798.2G  0 lvm  /
  │   ├─rhvh_rhsqa--grafton11--nic2-pool00                        253:5    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log_audit                 253:6    0     2G  0 lvm  /var/log/audit
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log                       253:7    0     8G  0 lvm  /var/log
  │   ├─rhvh_rhsqa--grafton11--nic2-var                           253:8    0    15G  0 lvm  /var
  │   ├─rhvh_rhsqa--grafton11--nic2-tmp                           253:9    0     1G  0 lvm  /tmp
  │   ├─rhvh_rhsqa--grafton11--nic2-home                          253:10   0     1G  0 lvm  /home
  │   ├─rhvh_rhsqa--grafton11--nic2-root                          253:11   0 798.2G  0 lvm  
  │   └─rhvh_rhsqa--grafton11--nic2-var_crash                     253:12   0    10G  0 lvm  /var/crash
  ├─rhvh_rhsqa--grafton11--nic2-pool00_tdata                      253:1    0 825.2G  0 lvm  
  │ └─rhvh_rhsqa--grafton11--nic2-pool00-tpool                    253:2    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-rhvh--4.2.2.1--0.20180420.0+1 253:3    0 798.2G  0 lvm  /
  │   ├─rhvh_rhsqa--grafton11--nic2-pool00                        253:5    0 825.2G  0 lvm  
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log_audit                 253:6    0     2G  0 lvm  /var/log/audit
  │   ├─rhvh_rhsqa--grafton11--nic2-var_log                       253:7    0     8G  0 lvm  /var/log
  │   ├─rhvh_rhsqa--grafton11--nic2-var                           253:8    0    15G  0 lvm  /var
  │   ├─rhvh_rhsqa--grafton11--nic2-tmp                           253:9    0     1G  0 lvm  /tmp
  │   ├─rhvh_rhsqa--grafton11--nic2-home                          253:10   0     1G  0 lvm  /home
  │   ├─rhvh_rhsqa--grafton11--nic2-root                          253:11   0 798.2G  0 lvm  
  │   └─rhvh_rhsqa--grafton11--nic2-var_crash                     253:12   0    10G  0 lvm  /var/crash
  └─rhvh_rhsqa--grafton11--nic2-swap                              253:4    0     4G  0 lvm  [SWAP]
sdb                                                                 8:16   0  18.2T  0 disk 
sdc                                                                 8:32   0 223.1G  0 disk 
sdd                                                                 8:48   0   931G  0 disk 
sde                                                                 8:64   0   1.8T  0 disk 
sdf                                                                 8:80   0   1.8T  0 disk 


[root@ ~]# blkid /dev/sdb
/dev/sdb: UUID="05e20c8a-6128-4f2c-aa8c-96b0512b0d2e" TYPE="vdo" 


The 'lsblk' command does not list the VDO volume on /dev/sdb, but there is a VDO signature on this device.

When gdeploy is run to create a VDO volume on this device, the following error is observed:

<snip>

TASK [Create VDO with specified size] ******************************************
failed: [10.70.45.33] (item={u'disk': u'/dev/sdb', u'logicalsize': u'164840G', u'name': u'vdo_sdb'}) => {"changed": false, "err": "vdo: ERROR -   Couldn't find device with uuid ZhRR6r-yR0L-OfGr-9LzK-jEC3-Ww1j-HHioKb.\n", "item": {"disk": "/dev/sdb", "logicalsize": "164840G", "name": "vdo_sdb"}, "msg": "Creating VDO vdo_sdb failed.", "rc": 2}
	to retry, use: --limit @/tmp/tmpY6Co3_/vdo-create.retry

PLAY RECAP *********************************************************************
10.70.45.33                : ok=0    changed=0    unreachable=0    failed=1   
</snip>
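
For completeness, one possible manual workaround (assuming the data on /dev/sdb is no longer needed) is to wipe the stale signature before re-running the deployment:

# Hedged workaround sketch:
blkid /dev/sdb       # still shows TYPE="vdo" from the previous deployment
wipefs -a /dev/sdb   # remove the stale signature so a new VDO volume can be created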

Comment 2 Sachidananda Urs 2018-05-11 10:30:46 UTC
sas,

Comment 4 Sachidananda Urs 2018-10-22 07:11:15 UTC
sas, this situation is handled by the vdo module now: if the VDO volume is already created, the create step does not throw an error; the action is idempotent.
I will be closing this bug since this would not be an issue in gluster-ansible. Feel free to reopen the bug if you see the issue again.
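
Roughly speaking, idempotent here means a re-run creates the volume only when it does not already exist; a shell approximation of that behaviour (not the module's actual code) would be:

# Hedged illustration only:
if ! vdo list --all | grep -qx "vdo_sdb"; then
    vdo create --name=vdo_sdb --device=/dev/sdb --vdoLogicalSize=164840G
fi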

