Bug 1530043 - Extending a block (iSCSI) raw disk attached to a VM with a 'virtio' interface does not change the LV size on the SPM
Summary: Extending a block (iSCSI) raw disk attached to a VM with a 'virtio' interface does not change the LV size on the SPM
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.0.2
Hardware: Unspecified
OS: Unspecified
unspecified
medium
Target Milestone: ovirt-4.2.1
Assignee: Daniel Erez
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-01 13:33 UTC by Avihai
Modified: 2018-02-12 11:49 UTC
6 users

Fixed In Version: ovirt-engine-4.2.1.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-12 11:49:48 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+


Attachments
engine, vdsm logs (1.32 MB, application/x-gzip), 2018-01-01 13:33 UTC, Avihai
engine, vdsm logs (3.76 MB, application/x-gzip), 2018-01-03 05:04 UTC, Avihai
image download logs (225.97 KB, application/x-gzip), 2018-01-03 05:07 UTC, Avihai


Links
oVirt gerrit 86028 (master, MERGED): core: update diskImageDynamic after ExtendImageSize (last updated 2018-01-08 15:40:40 UTC)

Description Avihai 2018-01-01 13:33:36 UTC
Created attachment 1375188 [details]
engine , vdsm logs

Description of problem:
When extending a block (iSCSI) raw disk attached to a VM with a 'virtio' interface, the LV size on the SPM remains 1G.

The issue does not occur with the 'virtio-scsi' interface.
The issue does not occur on 4.1.7.5-0.1.el7, so this is a regression.

This was revealed while downloading 2 disks (both raw, one with a virtio interface and the other with a virtio-scsi interface) after extending both to 2G: only the virtio-scsi disk downloaded 2G; the virtio disk downloaded only 1G, as it was not extended to 2G as expected.


Version-Release number of selected component (if applicable):
Engine:
ovirt-engine-4.2.0.2-0.1.el7.noarch

VDSM:
4.20.9.3-1

How reproducible:
100%


Steps to Reproduce (either in UI/RestAPI; a minimal SDK sketch of steps 1-3 follows the list):
1. Create a 1G disk, format: raw.
2. Create a VM and attach the disk with interface: virtio.
3. Edit the disk, extend it by 1G, and wait until the disk state is 'OK'.
4. Deactivate the disk and remove it (without deleting it).
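For reference, a minimal sketch of steps 1-3 with the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, VM name and storage domain name are placeholders, not values from this environment:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

GiB = 2 ** 30

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
system = connection.system_service()

# Steps 1-2: attach a new 1G raw (preallocated) disk with a virtio interface.
vm = system.vms_service().list(search='name=test-vm')[0]
attachments = system.vms_service().vm_service(vm.id).disk_attachments_service()
attachment = attachments.add(
    types.DiskAttachment(
        disk=types.Disk(
            name='disk_virtioraw',
            format=types.DiskFormat.RAW,
            sparse=False,
            provisioned_size=1 * GiB,
            storage_domains=[types.StorageDomain(name='iscsi_sd')],
        ),
        interface=types.DiskInterface.VIRTIO,
        bootable=False,
        active=True,
    ),
)

# Wait until the new disk leaves the LOCKED state.
disk_service = system.disks_service().disk_service(attachment.disk.id)
while disk_service.get().status != types.DiskStatus.OK:
    time.sleep(2)

# Step 3: extend the disk by 1G and wait until its status is OK again.
disk_service.update(types.Disk(provisioned_size=2 * GiB))
while disk_service.get().status != types.DiskStatus.OK:
    time.sleep(2)

connection.close()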

Actual results:
Checking the LV size on the SPM, the LV size is 1G.

Looking at the SPM, I see the LV size is 1G instead of 2G:
lvs | grep 0e547a89-0a2d-4db1-990a-1c2b719e1cc5
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0e547a89-0a2d-4db1-990a-1c2b719e1cc5 84b67981-52f7-4f4c-9ff6-48693e1ac375 -wi-------   1.00g   

Expected results:
Checking the LV size on the SPM, the LV size should be 2G.


Additional info:

Engine:
2017-12-31 17:21:36,324+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-9331) [] EVENT_ID: USER_EXTEND_DISK_SIZE_SUCCESS(371), Size of the disk 'disk_virtioraw_3117191084' was successfully updated to 2 GB by admin@internal-authz.

from SPM vdsm.log:
2017-12-31 17:21:24,626+0200 INFO  (jsonrpc/5) [api.virt] START diskSizeExtend(driveSpecs={'poolID': 'f0b8b007-228d-4038-a22e-424bd6b4237d', 'volumeID': '0e547a89-0a2d-4db1-990a-1c2b719e1cc5', 'domainID': '84b67981-52f7-4f4c-9ff6-48693e1ac375', 'imageID': '7cb2777b-e317-4352-a581-6e2801fbbe57'}, newSize='2147483648') from=::ffff:10.35.162.7,37738 (api:46)
2017-12-31 17:21:24,627+0200 DEBUG (jsonrpc/5) [storage.TaskManager.Task] (Task='2b73ae95-0de2-41cb-b895-aeec42261772') moving from state init -> state preparing (task:602)
2017-12-31 17:21:24,628+0200 INFO  (jsonrpc/5) [vdsm.api] START updateVolumeSize(spUUID='f0b8b007-228d-4038-a22e-424bd6b4237d', sdUUID='84b67981-52f7-4f4c-9ff6-48693e1ac375', imgUUID='7cb2777b-e317-4352-a581-6e2801fbbe57', volUUID='0e547a89-0a2d-4db1-990a-1c2b719e1cc5', newSize='2147483648') from=::ffff:10.35.162.7,37738, task_id=2b73ae95-0de2-41cb-b895-aeec42261772 (api:46)
2017-12-31 17:21:24,629+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/dd iflag=direct skip=8 bs=512 if=/dev/84b67981-52f7-4f4c-9ff6-48693e1ac375/metadata count=1 (cwd None) (commands:65)
2017-12-31 17:21:24,643+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00124549 s, 411 kB/s\n'; <rc> = 0 (commands:86)
2017-12-31 17:21:24,644+0200 DEBUG (jsonrpc/5) [storage.Misc] err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00124549 s, 411 kB/s'], size: 512 (misc:124)
2017-12-31 17:21:24,645+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/dd iflag=direct skip=8 bs=512 if=/dev/84b67981-52f7-4f4c-9ff6-48693e1ac375/metadata count=1 (cwd None) (commands:65)
2017-12-31 17:21:24,658+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00108797 s, 471 kB/s\n'; <rc> = 0 (commands:86)
2017-12-31 17:21:24,659+0200 DEBUG (jsonrpc/5) [storage.Misc] err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00108797 s, 471 kB/s'], size: 512 (misc:124)
2017-12-31 17:21:24,660+0200 INFO  (jsonrpc/5) [storage.VolumeManifest] Volume: preparing volume 84b67981-52f7-4f4c-9ff6-48693e1ac375/0e547a89-0a2d-4db1-990a-1c2b719e1cc5 (volume:558)
2017-12-31 17:21:24,660+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/dd iflag=direct skip=8 bs=512 if=/dev/84b67981-52f7-4f4c-9ff6-48693e1ac375/metadata count=1 (cwd None) (commands:65)
2017-12-31 17:21:24,671+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00100811 s, 508 kB/s\n'; <rc> = 0 (commands:86)
2017-12-31 17:21:24,672+0200 DEBUG (jsonrpc/5) [storage.Misc] err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00100811 s, 508 kB/s'], size: 512 (misc:124)
2017-12-31 17:21:24,673+0200 DEBUG (jsonrpc/5) [storage.ResourceManager] Trying to register resource '03_lvm_84b67981-52f7-4f4c-9ff6-48693e1ac375.0e547a89-0a2d-4db1-990a-1c2b719e1cc5' for lock type 'exclusive' (resourceManager:495)
2017-12-31 17:21:24,674+0200 INFO  (jsonrpc/5) [storage.LVM] Activating lvs: vg=84b67981-52f7-4f4c-9ff6-48693e1ac375 lvs=['0e547a89-0a2d-4db1-990a-1c2b719e1cc5'] (lvm:1301)
2017-12-31 17:21:24,675+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a51600494|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --available y 84b67981-52f7-4f4c-9ff6-48693e1ac375/0e547a89-0a2d-4db1-990a-1c2b719e1cc5 (cwd None) (commands:65)
2017-12-31 17:21:24,871+0200 DEBUG (mpathlistener) [storage.udev] Received udev event (action=add, device=Device(u'/sys/devices/virtual/block/dm-36')) (udev:218)
2017-12-31 17:21:24,916+0200 DEBUG (check/loop) [storage.check] START check '/dev/fc17533e-689c-4cb5-9415-678d8789c56e/metadata' (delay=0.00) (check:282)
2017-12-31 17:21:24,969+0200 DEBUG (check/loop) [storage.asyncevent] Process <subprocess32.Popen object at 0x34e11d0> terminated (count=1) (asyncevent:659)
2017-12-31 17:21:24,970+0200 DEBUG (check/loop) [storage.check] FINISH check '/dev/fc17533e-689c-4cb5-9415-678d8789c56e/metadata' (rc=0, elapsed=0.06) (check:328)
2017-12-31 17:21:24,973+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] SUCCESS: <err> = ''; <rc> = 0 (commands:86)
2017-12-31 17:21:24,974+0200 DEBUG (jsonrpc/5) [storage.ResourceManager] Resource '03_lvm_84b67981-52f7-4f4c-9ff6-48693e1ac375.0e547a89-0a2d-4db1-990a-1c2b719e1cc5' is free. Now locking as 'exclusive' (1 active user) (resourceManager:552)
2017-12-31 17:21:24,975+0200 DEBUG (jsonrpc/5) [storage.ResourceManager.Request] (ResName='03_lvm_84b67981-52f7-4f4c-9ff6-48693e1ac375.0e547a89-0a2d-4db1-990a-1c2b719e1cc5', ReqID='9ca5f5e8-9868-4c99-8fc6-653ba6f049f9') Granted request (resourceManager:222)
2017-12-31 17:21:24,976+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a51600494|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 84b67981-52f7-4f4c-9ff6-48693e1ac375 (cwd None) (commands:65)
2017-12-31 17:21:25,023+0200 DEBUG (mpathlistener) [storage.udev] Received udev event (action=change, device=Device(u'/sys/devices/virtual/block/dm-36')) (udev:218)
2017-12-31 17:21:25,169+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] SUCCESS: <err> = ''; <rc> = 0 (commands:86)
2017-12-31 17:21:25,172+0200 DEBUG (jsonrpc/5) [storage.LVM] lvs reloaded (lvm:460)
2017-12-31 17:21:25,173+0200 DEBUG (jsonrpc/5) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/dd iflag=direct skip=8 bs=512 if=/dev/84b67981-52f7-4f4c-9ff6-48693e1ac375/metadata count=1 (cwd None) (commands:65)

2017-12-31 17:21:37,572+0200 INFO  (jsonrpc/2) [api.virt] FINISH diskSizeExtend return={'status': {'message': 'OK', 'code': 0}, 'size': '2147483648'} from=::ffff:10.35.162.7,37738 (api:52)
2017-12-31 17:21:37,572+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.diskSizeExtend succeeded in 2.62 seconds (__init__:573)

Comment 1 Nir Soffer 2018-01-01 23:19:13 UTC
Avihay, in vdsm log we see 2 extends:

$ grep diskSizeExtend vdsm.log 

2018-01-01 14:47:09,119+0200 INFO  (jsonrpc/6) [api.virt] START diskSizeExtend(driveSpecs={'poolID': 'f0b8b007-228d-4038-a22e-424bd6b4237d', 'volumeID': 'b58a451b-7eb9-4de6-a56e-e63698f198b3', 'domainID': '6a50b809-7842-4186-9aab-60ee9c266ca9', 'imageID': 'c4177747-c6e5-4c0a-b5e6-87eb18b973c0'}, newSize='2147483648') from=::ffff:10.35.162.7,37738 (api:46)

2018-01-01 14:47:09,165+0200 INFO  (jsonrpc/6) [api.virt] FINISH diskSizeExtend return={'status': {'message': 'OK', 'code': 0}, 'size': '2147483648'} from=::ffff:10.35.162.7,37738 (api:52)
2018-01-01 14:47:09,166+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call VM.diskSizeExtend succeeded in 0.04 seconds (__init__:573)

2018-01-01 14:51:08,656+0200 INFO  (jsonrpc/0) [api.virt] START diskSizeExtend(driveSpecs={'poolID': 'f0b8b007-228d-4038-a22e-424bd6b4237d', 'volumeID': '5da9260f-3428-4642-a256-b66277c220dc', 'domainID': '6a50b809-7842-4186-9aab-60ee9c266ca9', 'imageID': '3e23e215-a00e-4877-b148-dc9c69ad4805'}, newSize='2147483648') from=::ffff:10.35.162.7,37738 (api:46)

2018-01-01 14:51:08,701+0200 INFO  (jsonrpc/0) [api.virt] FINISH diskSizeExtend return={'status': {'message': 'OK', 'code': 0}, 'size': '2147483648'} from=::ffff:10.35.162.7,37738 (api:52)
2018-01-01 14:51:08,702+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.diskSizeExtend succeeded in 0.05 seconds (__init__:573)

The first extends volume b58a451b-7eb9-4de6-a56e-e63698f198b3 to 2G.
The second extends 5da9260f-3428-4642-a256-b66277c220dc to 2G.

We can also see that these volumes are extended:

$ grep lvextend vdsm.log 

2018-01-01 14:47:04,070+0200 DEBUG (tasks/2) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a51600492|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 2048m 6a50b809-7842-4186-9aab-60ee9c266ca9/b58a451b-7eb9-4de6-a56e-e63698f198b3 (cwd None) (commands:65)

2018-01-01 14:51:05,441+0200 DEBUG (tasks/5) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a51600492|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 2048m 6a50b809-7842-4186-9aab-60ee9c266ca9/5da9260f-3428-4642-a256-b66277c220dc (cwd None) (commands:65)


In comment 0 you show that volume 0e547a89-0a2d-4db1-990a-1c2b719e1cc5 size is 1G - but this volume
is not mentioned in vdsm log:

    grep 0e547a89-0a2d-4db1-990a-1c2b719e1cc5 vdsm.log # no output

Looks like everything is correct on vdsm side.

Can you check this:

1. Extend disk
2. Wait until disk is OK
3. Query disk size using REST API - same way we do before uploading.
4. If disk size is still 1G, repeat the query every few seconds; we
   want to know when we see the correct value (a polling sketch follows below).
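A rough polling sketch along the lines of steps 3-4, assuming basic-auth access to the REST API; the engine URL, credentials and disk ID are placeholders:

import time
import xml.etree.ElementTree as ET

import requests

# Placeholders: engine URL, credentials and the disk ID to watch.
DISK_URL = 'https://engine.example.com/ovirt-engine/api/disks/DISK_ID'
AUTH = ('admin@internal', 'password')
EXPECTED = 2 * 2 ** 30  # 2G

while True:
    resp = requests.get(DISK_URL, auth=AUTH, verify=False,
                        headers={'Accept': 'application/xml'})
    resp.raise_for_status()
    disk = ET.fromstring(resp.content)
    actual = int(disk.findtext('actual_size'))
    provisioned = int(disk.findtext('provisioned_size'))
    print(time.strftime('%H:%M:%S'),
          'actual_size=%s provisioned_size=%s' % (actual, provisioned))
    if actual >= EXPECTED:
        break
    time.sleep(5)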

Comment 2 Avihai 2018-01-02 05:24:10 UTC
(In reply to Nir Soffer from comment #1)
> Avihay, in vdsm log we see 2 extends:
> 
> $ grep diskSizeExtend vdsm.log 
> 
> 2018-01-01 14:47:09,119+0200 INFO  (jsonrpc/6) [api.virt] START
> diskSizeExtend(driveSpecs={'poolID': 'f0b8b007-228d-4038-a22e-424bd6b4237d',
> 'volumeID': 'b58a451b-7eb9-4de6-a56e-e63698f198b3', 'domainID':
> '6a50b809-7842-4186-9aab-60ee9c266ca9', 'imageID':
> 'c4177747-c6e5-4c0a-b5e6-87eb18b973c0'}, newSize='2147483648')
> from=::ffff:10.35.162.7,37738 (api:46)
> 
> 2018-01-01 14:47:09,165+0200 INFO  (jsonrpc/6) [api.virt] FINISH
> diskSizeExtend return={'status': {'message': 'OK', 'code': 0}, 'size':
> '2147483648'} from=::ffff:10.35.162.7,37738 (api:52)
> 2018-01-01 14:47:09,166+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> call VM.diskSizeExtend succeeded in 0.04 seconds (__init__:573)
> 
> 2018-01-01 14:51:08,656+0200 INFO  (jsonrpc/0) [api.virt] START
> diskSizeExtend(driveSpecs={'poolID': 'f0b8b007-228d-4038-a22e-424bd6b4237d',
> 'volumeID': '5da9260f-3428-4642-a256-b66277c220dc', 'domainID':
> '6a50b809-7842-4186-9aab-60ee9c266ca9', 'imageID':
> '3e23e215-a00e-4877-b148-dc9c69ad4805'}, newSize='2147483648')
> from=::ffff:10.35.162.7,37738 (api:46)
> 
> 2018-01-01 14:51:08,701+0200 INFO  (jsonrpc/0) [api.virt] FINISH
> diskSizeExtend return={'status': {'message': 'OK', 'code': 0}, 'size':
> '2147483648'} from=::ffff:10.35.162.7,37738 (api:52)
> 2018-01-01 14:51:08,702+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call VM.diskSizeExtend succeeded in 0.05 seconds (__init__:573)
> 
> The first extends volume b58a451b-7eb9-4de6-a56e-e63698f198b3 to 2G.
> The second extends 5da9260f-3428-4642-a256-b66277c220dc to 2G.
> 
> We can also see that these volumes are extended:
> 
> $ grep lvextend vdsm.log 
> 
> 2018-01-01 14:47:04,070+0200 DEBUG (tasks/2) [storage.Misc.excCmd]
> /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvextend --config
> ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
> write_cache_state=0 disable_after_error_count=3 filter = [
> '\''a|/dev/mapper/3514f0c5a51600492|'\'', '\''r|.*|'\'' ] }  global { 
> locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } 
> backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 2048m
> 6a50b809-7842-4186-9aab-60ee9c266ca9/b58a451b-7eb9-4de6-a56e-e63698f198b3
> (cwd None) (commands:65)
> 
> 2018-01-01 14:51:05,441+0200 DEBUG (tasks/5) [storage.Misc.excCmd]
> /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvextend --config
> ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
> write_cache_state=0 disable_after_error_count=3 filter = [
> '\''a|/dev/mapper/3514f0c5a51600492|'\'', '\''r|.*|'\'' ] }  global { 
> locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } 
> backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 2048m
> 6a50b809-7842-4186-9aab-60ee9c266ca9/5da9260f-3428-4642-a256-b66277c220dc
> (cwd None) (commands:65)
> 
> 
> In comment 0 you show that volume 0e547a89-0a2d-4db1-990a-1c2b719e1cc5 size
> is 1G - but this volume
> is not mentioned in vdsm log:
> 
>     grep 0e547a89-0a2d-4db1-990a-1c2b719e1cc5 vdsm.log # no output
> 
> Looks like everything is correct on vdsm side.


My bad. The vdsm log that I uploaded is relevant for image ID = b58a451b-7eb9-4de6-a56e-e63698f198b3, which is a raw image that was extended, meaning the provisioned size indeed changed to 2G but its actual size remains 1G.

2018-01-01 14:47:03,585+0200 INFO  (jsonrpc/2) [vdsm.api] START extendVolumeSize(spUUID='f0b8b007-228d-4038-a22e-424bd6b4237d', sdUUID='6a50b809-7842-4186-9aab-60ee9c266ca9', imgUUID='c4177747-c6e5-4c0a-b5e6-87eb18b973c0', volUUID='b58a451b-7eb9-4de6-a56e-e63698f198b3', newSize='2147483648') from=::ffff:10.35.162.7,37750, flow_id=c1803873-519d-4d7d-bfbe-1d986caa946f, task_id=c8ef04cb-3fa1-4b63-8f82-5f008f3dbdaa (api:46)


> 
> Can you check this:
> 
> 1. Extend disk
> 2. Wait until disk is OK
> 3. Query disk size using REST API - same way we do before uploading.
> 4. If disk size is still 1g, repeat the query every few seconds, we 
>    want to know when we see the correct value.

Checking via REST, the disk's provisioned size changed to 2G but the actual size is still 1G.

Query disk size using REST API:

<disk href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0" id="c4177747-c6e5-4c0a-b5e6-87eb18b973c0">
<actions>
<link href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0/export" rel="export"/>
<link href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0/sparsify" rel="sparsify"/>
<link href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0/move" rel="move"/>
<link href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0/refreshlun" rel="refreshlun"/>
<link href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0/copy" rel="copy"/>
</actions>
<name>test</name>
<link href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0/permissions" rel="permissions"/>
<link href="/ovirt-engine/api/disks/c4177747-c6e5-4c0a-b5e6-87eb18b973c0/statistics" rel="statistics"/>
<actual_size>1073741824</actual_size>
<alias>test</alias>
<format>raw</format>
<image_id>b58a451b-7eb9-4de6-a56e-e63698f198b3</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>2147483648</provisioned_size>
<shareable>false</shareable>
<sparse>false</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<total_size>1073741824</total_size>
<wipe_after_delete>false</wipe_after_delete>
<disk_profile href="/ovirt-engine/api/diskprofiles/b8801dd5-de6f-4aef-9cd5-213c68fae783" id="b8801dd5-de6f-4aef-9cd5-213c68fae783"/>
<quota href="/ovirt-engine/api/datacenters/f0b8b007-228d-4038-a22e-424bd6b4237d/quotas/39130c87-898b-4aea-b59d-8b0466da6f48" id="39130c87-898b-4aea-b59d-8b0466da6f48"/>
<storage_domains>
<storage_domain href="/ovirt-engine/api/storagedomains/6a50b809-7842-4186-9aab-60ee9c266ca9" id="6a50b809-7842-4186-9aab-60ee9c266ca9"/>
</storage_domains>
</disk>

Comment 3 Nir Soffer 2018-01-02 11:15:54 UTC
Avihay, can you check the same REST calls with 4.1?

I think that in 4.1 we reported only the virtual size of the disk, so reporting
wrong virtual size in 4.2 is not a regression.

Also, when do we start to report the correct size on 4.2? I assume that eventually
we will update the actual size in the engine and report it.

Comment 4 Avihai 2018-01-02 12:57:58 UTC
(In reply to Nir Soffer from comment #3)
> Avihay, can you check the same REST calls with 4.1?
Sure, this is from 4.1:
<disk href="/ovirt-engine/api/disks/4f70775c-0790-46aa-adf5-7c14afe80c14" id="4f70775c-0790-46aa-adf5-7c14afe80c14">
<actions>
<link href="/ovirt-engine/api/disks/4f70775c-0790-46aa-adf5-7c14afe80c14/sparsify" rel="sparsify"/>
<link href="/ovirt-engine/api/disks/4f70775c-0790-46aa-adf5-7c14afe80c14/export" rel="export"/>
<link href="/ovirt-engine/api/disks/4f70775c-0790-46aa-adf5-7c14afe80c14/move" rel="move"/>
<link href="/ovirt-engine/api/disks/4f70775c-0790-46aa-adf5-7c14afe80c14/copy" rel="copy"/>
</actions>
<name>test2</name>
<link href="/ovirt-engine/api/disks/4f70775c-0790-46aa-adf5-7c14afe80c14/permissions" rel="permissions"/>
<link href="/ovirt-engine/api/disks/4f70775c-0790-46aa-adf5-7c14afe80c14/statistics" rel="statistics"/>
<actual_size>1073741824</actual_size>
<alias>test2</alias>
<format>raw</format>
<image_id>fcb53949-ce1c-4c44-a3ae-bb1f667e7cec</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>2147483648</provisioned_size>
<shareable>false</shareable>
<sparse>false</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<wipe_after_delete>false</wipe_after_delete>
<disk_profile href="/ovirt-engine/api/diskprofiles/f3583dba-39cf-444a-b1b4-7f7f28b528b3" id="f3583dba-39cf-444a-b1b4-7f7f28b528b3"/>
<quota id="af12becd-e7d1-4466-9849-454041415f93"/>
<storage_domains>
<storage_domain href="/ovirt-engine/api/storagedomains/9ef38ced-6a40-4f05-a6a4-9bd54c9d63a0" id="9ef38ced-6a40-4f05-a6a4-9bd54c9d63a0"/>
</storage_domains>
</disk>

> I think that in 4.1 we reported only the virtual size of the disk, so
> reporting
> wrong virtual size in 4.2 is not a regression.
> 
> Also, when do we start to report the correct size on 4.2? I assume that
> eventuall
> we will update the actual size in engine report it.

I saw 2 issues; I'm not sure if they are related, but this bug was opened on the first one:

Issue #1:
On the SPM host, looking at lvs, the LV size of one of the 2 raw disks (the one with the virtio interface) stayed the same after extending the disk to 2G. I checked on a 4.1 setup and could not reproduce the issue there, which is why I marked this as a regression.

Yesterday this issue reproduced every time (I tried the same scenario at least 5 times).
Today, on the same 4.2 setup, I cannot reproduce the issue; I am not sure what changed. Anyway, removing the regression keyword.

Issue #2:
In the REST GET response for raw disks, I expect that the 'provisioned size' value always equals the 'actual size' value, right?

-> This issue is indeed not a regression, but it is still an issue :).
   Does this false report affect anything?

Please tell me how you want to proceed here, as issue #1 is much harder to reproduce than I thought at the beginning.

Comment 5 Avihai 2018-01-03 05:03:03 UTC
I ran the same scenario with disk download as in the original scenario, and the issue reproduced.

Steps to Reproduce (either in UI/RestAPI; a download sketch follows the list):
1. Create 4 1G disks, format: raw/qcow, with virtio/virtio-scsi interfaces.
2. Create a VM and attach the disks.
3. Edit the disks, extend each by 1G, and wait until the disks' states are 'OK'.
4. Deactivate the disks and remove them (without deleting them).
5. Download all 4 disks; only for disk image 81683f81-cc15-43a4-93fb-8d2613cf4978 (the raw/virtio-scsi disk) does the download fail after 1G (all other disk downloads complete successfully).
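A hedged sketch of the step-5 download path, loosely following the SDK's download_disk.py example (ovirtsdk4 plus the imageio proxy). IDs and credentials are placeholders, error handling and SSL verification are omitted, and the interpretation of the error is an assumption: the automation apparently reads based on the new 2G size while the host-side ticket still carries the stale 1G size, which would match the "Content-Length out of allowed range" response above.

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types
import requests

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
system = connection.system_service()

DISK_ID = 'd77083c7-9ffe-495c-8647-acd2dcad1e74'  # failing disk in this comment
disk = system.disks_service().disk_service(DISK_ID).get()

# Start a download transfer and wait until the ticket is ready.
transfers = system.image_transfers_service()
transfer = transfers.add(types.ImageTransfer(
    image=types.Image(id=disk.id),
    direction=types.ImageTransferDirection.DOWNLOAD,
))
transfer_service = transfers.image_transfer_service(transfer.id)
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

# Read the image through the imageio proxy using the signed ticket.
resp = requests.get(
    transfer.proxy_url,
    headers={'Authorization': transfer.signed_ticket},
    verify=False,
    stream=True,
)
resp.raise_for_status()
with open('download.raw', 'wb') as out:
    for chunk in resp.iter_content(64 * 1024 * 1024):
        out.write(chunk)

transfer_service.finalize()
connection.close()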

Logs attached.

Relevant failing disk: image ID = 81683f81-cc15-43a4-93fb-8d2613cf4978.

details:

Automation failed at 2018-01-03 00:19:56 while trying to download this image, upon reaching the 1G size:
2018-01-03 00:19:56,365 - ThreadPoolExecutor-3_0 - art.rhevm_api.tests_lib.low_level.streaming_api - INFO - Error: {"explanation": "Access was denied to this resource.", "code": 403, "detail": "Failed response from host: {\"explanation\": \"Access was denied to this resource.\", \"code\": 403, \"detail\": \"Content-Length out of allowed range\", \"title\": \"Forbidden\"}", "title": "Forbidden"}

Image proxy engine log:
(Thread-594) ERROR 2018-01-03 00:19:56,322 images:201:root:(make_imaged_request) Failed: {"explanation": "Access was denied to this resource.", "code": 403, "detail": "Content-Length out of allowed range", "title": "Forbidden"}

From the host's daemon.log, the last entry was downloading the last 64M chunk, reaching 1G (1006632960 B + 64*1024*1024 B = 1073741824 B = 1 GB):
2018-01-03 00:18:55,531 INFO    (Thread-3371) [images] Reading 67108864 bytes at offset 1006632960 from /rhev/data-center/mnt/blockSD/84b67981-52f7-4f4c-9ff6-48693e1ac375/images/d77083c7-9ffe-495c-8647-acd2dcad1e74/81683f81-cc15-43a4-93fb-8d2613cf4978 for ticket e0b8bf9f-b9b0-4ce0-b537-113d2eeb25fb



VDSM log:
- Extend was done at 2018-01-03 00:00:52:
2018-01-03 00:00:52,392+0200 INFO  (jsonrpc/0) [vdsm.api] START extendVolumeSize(spUUID='f0b8b007-228d-4038-a22e-424bd6b4237d', sdUUID='84b67981-52f7-4f4c-9ff6-48693e1ac375', imgUUID='d77083c7-9ffe-495c-8647-acd2dcad1e74', volUUID='81683f81-cc15-43a4-93fb-8d2613cf4978', newSize='2147483648') from=::ffff:10.35.162.7,37750, flow_id=diskattachments_update_cd5179c9-3b95, task_id=671de9c2-da25-4e2b-b0a3-6217db0c030b (api:46)
2018-01-03 00:00:57,745+0200 DEBUG (jsonrpc/7) [storage.TaskManager] scheduled job extendVolumeSize for task 953bcfd2-b012-47c4-a89f-d8648d0790dc  (taskManager:79)
2018-01-03 00:00:57,746+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH extendVolumeSize return=None from=::ffff:10.35.162.7,37750, flow_id=diskattachments_update_815d3258-0198, task_id=953bcfd2-b012-47c4-a89f-d8648d0790dc (api:52)

All 4 disks finished extending via VDSM (including the problematic image d77083c7-9ffe-495c-8647-acd2dcad1e74), but 19 minutes later, at 00:19:56, the download still failed at 1G instead of continuing to download 2G.
2018-01-03 00:00:52,447+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH extendVolumeSize return=None from=::ffff:10.35.162.7,37750, flow_id=diskattachments_update_cd5179c9-3b95, task_id=671de9c2-da25-4e2b-b0a3-6217db0c030b (api:52)

2018-01-03 00:00:54,966+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH extendVolumeSize return=None from=::ffff:10.35.162.7,37750, flow_id=diskattachments_update_394fa73d-6eaf, task_id=fd37b3c6-867c-43a1-876c-613f5033addf (api:52)

2018-01-03 00:00:57,746+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH extendVolumeSize return=None from=::ffff:10.35.162.7,37750, flow_id=diskattachments_update_815d3258-0198, task_id=953bcfd2-b012-47c4-a89f-d8648d0790dc (api:52)

2018-01-03 00:01:00,429+0200 INFO  (jsonrpc/5) [vdsm.api] FINISH extendVolumeSize return=None from=::ffff:10.35.162.7,37750, flow_id=diskattachments_update_be494117-0f66, task_id=f84881dd-d2da-4624-9e13-454e03b8b4a9 (api:52)

- Download of image ID 81683f81-cc15-43a4-93fb-8d2613cf4978 finished upon reaching 1G:
2018-01-03 00:19:52,102+0200 INFO  (jsonrpc/3) [vdsm.api] FINISH get_image_ticket return={'result': {u'uuid': u'e0b8bf9f-b9b0-4ce0-b537-113d2eeb25fb', u'ops': [u'read'], u'url': u'file:///rhev/data-center/mnt/blockSD/84b67981-52f7-4f4c-9ff6-48693e1ac375/images/d77083c7-9ffe-495c-8647-acd2dcad1e74/81683f81-cc15-43a4-93fb-8d2613cf4978', u'expires': 6565579, u'transferred': 1068498944, u'timeout': 277, u'active': True, u'size': 1073741824}} from=::ffff:10.35.162.7,37738, flow_id=1f6831d4-09b3-4b51-aafc-e3f233db9651, task_id=dfaf3eb2-6d55-40fe-a4b6-ecfa08c71e46 (api:52)
2018-01-03 00:19:52,103+0200 DEBUG (jsonrpc/3) [storage.TaskManager.Task] (Task='dfaf3eb2-6d55-40fe-a4b6-ecfa08c71e46') finished: {'result': {u'uuid': u'e0b8bf9f-b9b0-4ce0-b537-113d2eeb25fb', u'ops': [u'read'], u'url': u'file:///rhev/data-center/mnt/blockSD/84b67981-52f7-4f4c-9ff6-48693e1ac375/images/d77083c7-9ffe-495c-8647-acd2dcad1e74/81683f81-cc15-43a4-93fb-8d2613cf4978', u'expires': 6565579, u'transferred': 1068498944, u'timeout': 277, u'active': True, u'size': 1073741824}} (task:1201)

The last thing that was done on the image:
2018-01-03 00:20:03,729+0200 INFO  (jsonrpc/4) [storage.LVM] Deactivating lvs: vg=84b67981-52f7-4f4c-9ff6-48693e1ac375 lvs=['81683f81-cc15-43a4-93fb-8d2613cf4978'] (lvm:1309)
2018-01-03 00:20:03,730+0200 DEBUG (jsonrpc/4) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a51600494|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --available n 84b67981-52f7-4f4c-9ff6-48693e1ac375/81683f81-cc15-43a4-93fb-8d2613cf4978 (cwd None) (commands:65)



This time on the raw disk with the virtio-scsi interface:

 lvs | grep 81683f81-cc15-43a4-93fb-8d2613cf4978
  81683f81-cc15-43a4-93fb-8d2613cf4978 84b67981-52f7-4f4c-9ff6-48693e1ac375 -wi-------   2.00g   

REST GET ON THE DISK:
<disk href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74" id="d77083c7-9ffe-495c-8647-acd2dcad1e74">
<actions>
<link href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74/export" rel="export"/>
<link href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74/sparsify" rel="sparsify"/>
<link href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74/move" rel="move"/>
<link href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74/refreshlun" rel="refreshlun"/>
<link href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74/copy" rel="copy"/>
</actions>
<name>disk_virtio_scsiraw_0223582938</name>
<link href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74/permissions" rel="permissions"/>
<link href="/ovirt-engine/api/disks/d77083c7-9ffe-495c-8647-acd2dcad1e74/statistics" rel="statistics"/>
<actual_size>1073741824</actual_size>
<alias>disk_virtio_scsiraw_0223582938</alias>
<format>raw</format>
<image_id>81683f81-cc15-43a4-93fb-8d2613cf4978</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>2147483648</provisioned_size>
<shareable>false</shareable>
<sparse>false</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<total_size>1073741824</total_size>
<wipe_after_delete>true</wipe_after_delete>
<disk_profile href="/ovirt-engine/api/diskprofiles/6f2e62c4-8fa9-4b17-a2d2-54ffe99ea988" id="6f2e62c4-8fa9-4b17-a2d2-54ffe99ea988"/>
<quota href="/ovirt-engine/api/datacenters/f0b8b007-228d-4038-a22e-424bd6b4237d/quotas/39130c87-898b-4aea-b59d-8b0466da6f48" id="39130c87-898b-4aea-b59d-8b0466da6f48"/>

Comment 6 Avihai 2018-01-03 05:04:15 UTC
Created attachment 1376126 [details]
engine , vdsm logs

Comment 7 Avihai 2018-01-03 05:07:02 UTC
Created attachment 1376127 [details]
image download logs

Comment 8 Daniel Erez 2018-01-07 13:39:00 UTC
@Nir - do we need anything else here?

Comment 9 Nir Soffer 2018-01-07 13:41:56 UTC
(In reply to Daniel Erez from comment #8)
> @Nir - do we need anything else here?

What do you mean?

Comment 10 Daniel Erez 2018-01-08 16:20:07 UTC
It seems that we didn't update the actual size of the image after extending (in disk_image_dynamic). The suggested fix addresses it.
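For anyone verifying this, a hedged sketch of checking the size the engine keeps for the extended image, assuming read access to the engine PostgreSQL database; the table and column names (disk_image_dynamic, actual_size) follow the fix description and are assumptions, not verified against the 4.2 schema:

import psycopg2

# Placeholder connection details for the engine database.
conn = psycopg2.connect(dbname='engine', user='engine',
                        password='password', host='localhost')
cur = conn.cursor()
cur.execute(
    'SELECT image_id, actual_size FROM disk_image_dynamic WHERE image_id = %s',
    ('b58a451b-7eb9-4de6-a56e-e63698f198b3',))  # image from comment 2
for image_id, actual_size in cur.fetchall():
    # Before the fix this is expected to stay at 1073741824 (1G) after the
    # extend; with the fix it should reflect the new size.
    print(image_id, actual_size)
conn.close()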

Comment 11 Avihai 2018-01-22 11:58:47 UTC
Verified on 4.2.1.2.

Comment 12 Sandro Bonazzola 2018-02-12 11:49:48 UTC
This bugzilla is included in oVirt 4.2.1 release, published on Feb 12th 2018.

Since the problem described in this bug report should be
resolved in oVirt 4.2.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

