Bug 1257251 - Existing Logical volumes deleted after resize of LUN
Status: CLOSED NEXTRELEASE
Product: oVirt
Classification: Community
Component: vdsm
Version: 3.5
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Target Milestone: m1
Target Release: 3.6.0
Assigned To: Fred Rolland
QA Contact: Gil Klein
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2015-08-26 11:06 EDT by info
Modified: 2016-03-10 01:13 EST
CC: 14 users

Doc Type: Bug Fix
Last Closed: 2015-09-20 08:26:57 EDT
Type: Bug
oVirt Team: Storage


Attachments
vdsm-Logfile ovirt0 (1.24 MB, text/plain)
2015-09-03 04:05 EDT, info
vdsm-Logfile ovirt1 (4.47 MB, text/plain)
2015-09-03 04:06 EDT, info

Description info 2015-08-26 11:06:27 EDT
Description of problem:
Resizing the LUN of a storage domain causes the loss of existing logical volumes.


Version-Release number of selected component (if applicable):
oVirt nodes and engine run on CentOS 7.1
[oVirt shell (connected)]# info

backend version: 3.5
sdk version    : 3.5.2.1
cli version    : 3.5.0.6
python version : 2.7.5.final.0


How reproducible:
Steps to Reproduce:

1) SD size 374GB with 370GB free
 -> no hard disk created on this SD yet
LUN id: 360060e80101e2500058be22000000bbc
domain name: SD name
Type: Data Fibre Channel
SD is visible/accessible on both oVirt nodes

2) create a 25 GB hard disk on vm yoko
alias: yoko_testdisk0
image id: 5e8ad8d5-f60d-4742-921b-32d6e42305cd

3) SD free size 345 GB -> correct

4) boot vm "yoko"

5) resize the LUN with the storage array tool from 375GB to 400GB

6) execute the following command on both oVirt nodes
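# ask the kernel to rescan every SCSI disk so it picks up the new LUN size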
for letter in {a..z} ; do
echo 1 > /sys/block/sd${letter}/device/rescan
done


7) resize the multipath device on both nodes
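# grow the existing multipath map to match the rescanned path sizes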
multipathd resize map 360060e80101e2500058be22000000bbc


8) Now multipath -ll shows the correct size on both hosts
- lesn-ovirt0
360060e80101e2500058be22000000bbc dm-1 HITACHI ,DF600F          
size=400G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| |- 2:0:1:1 sdf 8:80  active ready  running
| `- 3:0:1:1 sdl 8:176 active ready  running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 2:0:0:1 sdd 8:48  active ready  running
  `- 3:0:0:1 sdj 8:144 active ready  running

- lesn-ovirt1:
360060e80101e2500058be22000000bbc dm-1 HITACHI ,DF600F          
size=400G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| |- 1:0:1:1 sdd 8:48  active ready  running
| `- 3:0:1:1 sdj 8:144 active ready  running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 1:0:0:1 sdc 8:32  active ready  running
  `- 3:0:0:1 sdi 8:128 active ready  running


9) At this point the SD size is still 375GB.
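A host-side check of the VG backing the SD confirms this; a minimal sketch, using the VG name that appears in the pvs output of step 11:

# the VG should still report roughly the old ~374G size here
vgs --units g 7621463a-d68c-4988-852d-5dc484f011a5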

10) resize PV on SPM (lesn-ovirt1) 
[root@lesn-ovirt1]# pvresize /dev/mapper/360060e80101e2500058be22000000bbc
  Physical volume "/dev/mapper/360060e80101e2500058be22000000bbc" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

  
11) lesn-ovirt1 shows the new size, but the existing logical volumes are lost.
    
[root@lesn-ovirt1]# pvs
  PV                                             VG                                   Fmt  Attr PSize   PFree  
  /dev/mapper/360060e80101e2500058be22000000bbc  7621463a-d68c-4988-852d-5dc484f011a5 lvm2 a--  399.62g 395.50g
  /dev/mapper/360060e80101e2510058be2210000000c2 space    

12) after a minute oVirt also shows the new size -> 100% free

    
13) stopping vm yoko and starting it again fails:
Thread-681773::ERROR::2015-08-26 16:33:20,678::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Logical volume does not exist: ('7621463a-d68c-4988-852d-5dc484f011a5/5e8ad8d5-f60d-4742-921b-32d6e42305cd',)", 'code': 610}}



Actual results:
The SD size is changed correctly, but the existing logical volumes are deleted or have disappeared.

Expected results:
Existing logical volumes should not be deleted.


Additional info:
Log files are available.
Comment 1 info 2015-09-03 04:05:24 EDT
Created attachment 1069649 [details]
vdsm-Logfile ovirt0
Comment 2 info 2015-09-03 04:06:15 EDT
Created attachment 1069650 [details]
vdsm-Logfile ovirt1
Comment 3 Fred Rolland 2015-09-09 04:06:57 EDT
Hi,

Changing storage from outside VDSM is not supported.
I suggest restarting VDSM on both hosts to see if that solves the issue.

Also, you can run pvscan --cache and check whether your logical volume shows up in the lvs output.
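For example, a minimal check on the SPM host (the VG name is taken from the pvs output in the description):

pvscan --cache /dev/mapper/360060e80101e2500058be22000000bbc
lvs 7621463a-d68c-4988-852d-5dc484f011a5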

Please update with your findings.

Note that in 3.6, refreshing the LUN size will be supported via the API.
http://www.ovirt.org/Features/LUN_Resize
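A rough sketch of what that call could look like, based on the feature page (the action name, engine address, and storage domain id here are illustrative and may change before the final 3.6 release):

# hypothetical engine address and storage domain id
curl -k -u 'admin@internal:password' -X POST \
  -H 'Content-Type: application/xml' \
  -d '<action><logical_units><logical_unit id="360060e80101e2500058be22000000bbc"/></logical_units></action>' \
  'https://engine.example.com/ovirt-engine/api/storagedomains/<sd-id>/refreshluns'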
Comment 4 info 2015-09-11 08:33:09 EDT
Restarting VDSM didn't help, because the existing volume was overwritten by the new one.
There is no way to recover the deleted volume.

As you said, it is not possible to resize the LUN in version 3.5?

As a workaround I have added an additional LUN to get more space and am waiting for 3.6.
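One thing that might be worth checking before writing the data off: if LVM metadata archiving was enabled on the SPM host when pvresize ran (it may well not be on oVirt hosts), the pre-resize VG metadata could in principle still be listed and restored. A hedged sketch, using the VG name from the description:

# list archived metadata versions for the VG, if any exist
vgcfgrestore --list 7621463a-d68c-4988-852d-5dc484f011a5
# restoring is only safe with the storage domain detached everywhere
vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg 7621463a-d68c-4988-852d-5dc484f011a5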
Comment 5 Allon Mureinik 2015-09-20 08:26:57 EDT
Unfortunately, after the LUN has been destroyed there is nothing we can do.
This will be fixed in 3.6 as part of bug 609689.
