Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 923955

Summary: [RFE] messages from vdsm that storage domain volume group is not big enough are not propagated in events tab and only in VDSM logs (during live storage migration)
Product: [oVirt] ovirt-engine
Reporter: Haim <hateya>
Component: RFEs
Assignee: Itamar Heim <iheim>
Status: CLOSED WONTFIX
QA Contact: Raz Tamir <ratamir>
Severity: medium
Docs Contact:
Priority: unspecified
Version: ---
CC: bsettle, bugs, lpeer, rbalakri, Rhev-m-bugs, srevivo, ylavi
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Flags: ylavi: ovirt-future?
       rule-engine: planning_ack?
       rule-engine: devel_ack?
       rule-engine: testing_ack?
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-26 12:59:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  engine & vdsm log

Description Haim 2013-03-20 19:02:34 UTC
Description of problem:

During live storage migration, I get the following error in the VDSM log:

304f037f-8aa9-4809-adce-85ff594673d3::ERROR::2013-03-20 20:57:26,760::storage_mailbox::153::Storage.SPM.Messages.Extend::(processRequest) processRequest: Exception caught while trying to extend volume: 2bfb1a73-4546-40f6-a5cc-17763e4c159f in domain: 4b14d3b3-7da4-4ed5-88d0-bfb31f3b8df7
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/storage_mailbox.py", line 149, in processRequest
    pool.extendVolume(volume['domainID'], volume['volumeID'], size)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1324, in extendVolume
    sdCache.produce(sdUUID).extendVolume(volumeUUID, size, isShuttingDown)
  File "/usr/share/vdsm/storage/blockSD.py", line 1191, in extendVolume
    lvm.extendLV(self.sdUUID, volumeUUID, size) #, isShuttingDown) # FIXME
  File "/usr/share/vdsm/storage/lvm.py", line 1093, in extendLV
    free_size / constants.MEGAB))
VolumeGroupSizeError: Volume Group not big enough: ('4b14d3b3-7da4-4ed5-88d0-bfb31f3b8df7/2bfb1a73-4546-40f6-a5cc-17763e4c159f 104448 > 96640 (MiB)',)

This indicates that something bad has happened or is about to happen, and as an administrator, I think it would be wise to expose it in the UI (Events tab).
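The failing check can be sketched as follows. This is a minimal illustration, assuming only what the traceback shows: `lvm.extendLV` compares the requested LV size against the volume group's free space (converted to MiB) and raises `VolumeGroupSizeError` when it does not fit. Function and field names here are illustrative, not vdsm's actual code.

```python
MEGAB = 2 ** 20  # bytes per MiB


class VolumeGroupSizeError(Exception):
    """Raised when the VG lacks the free space needed to extend an LV."""


def extend_lv(vg_name, lv_name, new_size_mb, vg_free_bytes):
    """Hypothetical sketch of the guard in vdsm's lvm.extendLV.

    Refuses the extension when the requested size (MiB) exceeds the
    volume group's free space; otherwise it would go on to run lvextend.
    """
    free_size_mb = vg_free_bytes // MEGAB
    if new_size_mb > free_size_mb:
        # Mirrors the message seen in the log:
        # "Volume Group not big enough: ('<vg>/<lv> 104448 > 96640 (MiB)',)"
        raise VolumeGroupSizeError(
            "Volume Group not big enough: %s/%s %d > %d (MiB)"
            % (vg_name, lv_name, new_size_mb, free_size_mb))
    # ... the real implementation would invoke lvextend here
```

With the numbers from the traceback (104448 MiB requested, 96640 MiB free), this guard fires, and today the resulting exception is only visible in vdsm.log.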

Steps to reproduce:

1) Set up 2 hosts.
2) Run a VM and install an OS.
3) Start live migration of a 100G disk to a 200G storage domain.

Comment 1 Haim 2013-03-20 19:07:06 UTC
Created attachment 713418 [details]
engine & vdsm log

Comment 2 Ayal Baron 2013-07-07 07:25:22 UTC
Since this is not a call that the engine made, the only way to report it is to have events flow from vdsm to the engine.
I believe there is work on such a mechanism, but until then there isn't much we can do.
Postponing this so we can track it once that infrastructure exists.