Bug 712911

Summary: [vdsm] duplicate lvextend messages on same lv (between 2-5 messages)
Product: [Retired] oVirt
Component: vdsm
Reporter: Haim <hateya>
Assignee: Dan Kenigsberg <danken>
Status: CLOSED WONTFIX
Severity: medium
Priority: unspecified
Version: unspecified
CC: abaron, acathrow, amureini, bazulay, dyasny, hateya, iheim, mgoldboi, yeylon, ykaul
Target Milestone: ---
Target Release: 3.3.4
Hardware: x86_64
OS: Linux
Whiteboard: storage
Doc Type: Bug Fix
oVirt Team: Storage
Last Closed: 2013-01-30 22:51:11 UTC
Attachments: vdsm log

Description Haim 2011-06-13 14:55:05 UTC
Created attachment 504467 [details]
vdsm log

Description of problem:

In the qcow (thin-provisioned) scenario, when the actual allocated size is consumed and the high water mark is reached, an lvextend message is sent to the SPM. In my case I see duplicate messages for the same LV, which are not answered:

write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --size 7168m 0959db5e-eb92-4b10-a476-2b983036eeb2/252b16f3-34e6-4fe1-82f4-c8f727e8d02c' (cwd None)
write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --size 7168m 0959db5e-eb92-4b10-a476-2b983036eeb2/252b16f3-34e6-4fe1-82f4-c8f727e8d02c' (cwd None)
write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --size 7168m 0959db5e-eb92-4b10-a476-2b983036eeb2/252b16f3-34e6-4fe1-82f4-c8f727e8d02c' (cwd None)
write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --size 7168m 0959db5e-eb92-4b10-a476-2b983036eeb2/252b16f3-34e6-4fe1-82f4-c8f727e8d02c' (cwd None)
write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --size 7168m 0959db5e-eb92-4b10-a476-2b983036eeb2/252b16f3-34e6-4fe1-82f4-c8f727e8d02c' (cwd None)

This is NOT a high-priority issue, as the SPM eventually handles the lvextend request.
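For illustration only, the flow described above is roughly: monitor the volume's allocation watermark and, when it crosses a threshold, ask the SPM to extend the backing LV. A minimal Python sketch of that loop with per-volume deduplication (so a pending request suppresses repeat lvextend messages) follows; the helper names, threshold, and step size are assumptions, not VDSM's actual API.

# Hypothetical sketch (not VDSM code) of watermark-driven LV extension with
# per-volume deduplication, so a pending request suppresses duplicates.

EXTEND_STEP_MB = 1024     # assumed: grow the LV by 1G per request
WATERMARK_RATIO = 0.8     # assumed: request an extend at 80% allocation

pending_extends = set()   # volumes with an extend request already in flight


def send_extend_request(vol_id, new_size_mb):
    # Stand-in for the real SPM call; here it only logs the request.
    print("lvextend request: %s -> %d MB" % (vol_id, new_size_mb))


def check_volume(vol_id, allocated_mb, lv_size_mb):
    """Ask the SPM to extend vol_id at most once per watermark crossing."""
    if allocated_mb < WATERMARK_RATIO * lv_size_mb:
        return
    if vol_id in pending_extends:
        # A request is already outstanding; sending it again would produce
        # exactly the duplicate lvextend messages described in this bug.
        return
    pending_extends.add(vol_id)
    send_extend_request(vol_id, lv_size_mb + EXTEND_STEP_MB)


def on_extend_done(vol_id):
    """Called once the SPM reports that the lvextend finished."""
    pending_extends.discard(vol_id)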

repro steps:

1) VirtIO disk: actual size 1G (LV size), allocated (virtual) size 20G.
2) dd to the disk until it is full (a rough Python equivalent is sketched below), then grep for lvextend on the SPM.
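For reference, a rough Python equivalent of the dd step, assuming the thin VirtIO disk is formatted and mounted at /mnt/testdisk inside the guest (an assumed path); after it hits ENOSPC, grep the SPM's vdsm log for lvextend.

# Rough equivalent of "dd until the disk is full"; /mnt/testdisk is an
# assumed mount point for the thin disk inside the guest.
import errno

CHUNK = b"\0" * (1 << 20)   # write in 1 MiB chunks


def fill_disk(path="/mnt/testdisk/fill.bin"):
    written = 0
    try:
        with open(path, "wb") as f:
            while True:
                f.write(CHUNK)
                f.flush()
                written += len(CHUNK)
    except OSError as e:
        if e.errno != errno.ENOSPC:
            raise
    return written          # bytes written before the disk filled up


if __name__ == "__main__":
    print("wrote %d bytes before ENOSPC" % fill_disk())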


vdsm-4.9-74.el6.x86_64
qemu-kvm-0.12.1.2-2.164.el6.x86_64
libvirt-0.9.1-1.el6.x86_64

regression - not sure.

Comment 2 Dan Kenigsberg 2011-06-13 15:58:37 UTC
It would be interesting to understand whether this is just an effect of the qemu bug, but since it has no functional effect, it does not belong in rhel-6.2.0.

Comment 3 Itamar Heim 2013-01-30 22:51:11 UTC
Closing old bugs. If this issue is still relevant/important in current version, please re-open the bug.