Bug 924347
| Summary: | [engine-backend] engine reports a wrong VM disk size when the disk is 100% full (thin provision) | ||||||
|---|---|---|---|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Elad <ebenahar> | ||||
| Component: | ovirt-engine | Assignee: | Vered Volansky <vered> | ||||
| Status: | CLOSED WORKSFORME | QA Contact: | Elad <ebenahar> | ||||
| Severity: | high | Docs Contact: | |||||
| Priority: | unspecified | ||||||
| Version: | 3.2.0 | CC: | acathrow, amureini, derez, iheim, jkt, lpeer, Rhev-m-bugs, scohen, sgotliv, yeylon | ||||
| Target Milestone: | --- | Flags: | abaron: Triaged+ | ||||
| Target Release: | 3.5.0 | ||||||
| Hardware: | x86_64 | ||||||
| OS: | Unspecified | ||||||
| Whiteboard: | storage | ||||||
| Fixed In Version: | | Doc Type: | Bug Fix | ||||
| Doc Text: | | Story Points: | --- | ||||
| Clone Of: | | Environment: | | ||||
| Last Closed: | 2014-08-28 13:52:18 UTC | Type: | Bug | ||||
| Regression: | --- | Mount Type: | --- | ||||
| Documentation: | --- | CRM: | |||||
| Verified Versions: | Category: | --- | |||||
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |||||
| Cloudforms Team: | --- | Target Upstream Version: | |||||
| Embargoed: | ||||||
| Attachments: | ||||||
What do you mean it's reporting the wrong size? What is it reporting? We do monitor the disk size while the VM is running, via getVmStats. Possibly we should also fetch disk sizes when a VM changes state to Down, in order to get the most up-to-date sizes.

What I meant is the actual size of the thin-provisioned disk that the engine reports. I suggest a higher sampling frequency of the actual disk size for thin-provisioned disks.

Hi Elad, can you please attach the output of getVmStats on the affected VM (in order to verify that VDSM is reporting the correct disk size)? In addition, please mention the disk's size in the file system.

The disk size in the file system is 4.5 GB.
The engine reports that the actual disk size is 1 GB:
01f365f3-0085-4531-bd8c-fc81936d577c
Status = Up
username = Unknown
memUsage = 0
acpiEnable = true
session = Unknown
displaySecurePort = 5901
timeOffset = -43200
balloonInfo = {'balloon_max': 524288, 'balloon_cur': 524288}
network = {'vnet0': {'macAddr': '00:1a:4a:23:a1:32', 'rxDropped': '0', 'txDropped': '0', 'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': 'vnet0'}}
vmType = kvm
cpuUser = 0.44
elapsedTime = 1269
displayType = qxl
cpuSys = 0.66
appsList = []
hash = -3222232216653230493
pid = 15937
displayIp = 0
displayPort = 5900
guestIPs =
kvmEnable = true
disks = {'vda': {'readLatency': '0', 'apparentsize': '6442450944', 'writeLatency': '682385', 'imageID': '18b1b70d-f39e-4ac0-8a2d-07fb7ec634ce', 'flushLatency': '127079551', 'readRate': '0.00', 'truesize': '6442450944', 'writeRate': '473.59'}, 'hdc': {'readLatency': '0', 'apparentsize': '0', 'writeLatency': '0', 'flushLatency': '0', 'readRate': '0.00', 'truesize': '0', 'writeRate': '0.00'}}
monitorResponse = 0
statsAge = 0.32
clientIp = 10.35.3.197
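The values in the `disks` dict above are raw byte counts encoded as strings. A minimal conversion sketch (not part of the original report) for the `vda` entry:

```python
# Byte counts copied from the getVmStats 'disks' dict above, vda only.
vda = {'apparentsize': '6442450944', 'truesize': '6442450944'}

GIB = 1024 ** 3  # bytes per GiB

apparent_gib = int(vda['apparentsize']) / GIB
true_gib = int(vda['truesize']) / GIB
print(f"vda: apparentsize={apparent_gib:.1f} GiB, truesize={true_gib:.1f} GiB")
# prints: vda: apparentsize=6.0 GiB, truesize=6.0 GiB
```

So VDSM itself reports 6 GiB for vda here, against the 1 GB shown by the engine and the 4.5 GB seen in the guest file system.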
Sergey, is this related to the size issue in ISO domains?

Tried to reproduce. It seems the engine's reporting of the disk size is correct when writing to a VM disk from the guest.
Created attachment 713936 [details]
vdsm+engine logs

Description of problem:
When a VM's disk gets 100% full, the engine reports a wrong disk size. This happened to me with a thin-provisioned disk. In my case, I had one thin-provisioned disk with a 20 GB virtual size. I filled lv_root up to 100% using dd.

Version-Release number of selected component (if applicable):
vdsm-4.10.2-11.0.el6ev.x86_64
libvirt-0.10.2-18.el6_4.2.x86_64
rhevm-backend-3.2.0-10.14.beta1.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
On a setup with 2 hosts and one iSCSI storage domain:
1. Run one VM with one thin-provisioned disk.
2. In the guest, run: dd if=/dev/zero of=/tmp/file_name bs=1M

Actual results:
The disk gets full and the engine reports a wrong disk size.

Expected results:
The engine should monitor the disk size and report its correct size.

Additional info:
See the attached logs.
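The higher-frequency sampling of actual disk size suggested earlier in the thread could be sketched roughly as follows. This is hypothetical illustration only: `get_vm_stats` is a stand-in for a real VDSM getVmStats call, and the polling interval is made up, not the engine's actual value.

```python
import time

POLL_INTERVAL = 15  # seconds; illustrative only, not the real engine interval

def get_vm_stats(vm_id):
    # Stand-in for a VDSM getVmStats call; the shape matches the dump
    # attached in this bug (byte counts encoded as strings).
    return {'disks': {'vda': {'apparentsize': '6442450944',
                              'truesize': '6442450944'}}}

def sample_actual_sizes(vm_id):
    """Return the actual (allocated) size of each disk, in bytes."""
    stats = get_vm_stats(vm_id)
    return {disk: int(d['truesize']) for disk, d in stats['disks'].items()}

if __name__ == '__main__':
    # Poll a few times; a real monitor would run until VM shutdown and
    # push changed sizes back to the engine.
    for _ in range(2):
        print(sample_actual_sizes('01f365f3-0085-4531-bd8c-fc81936d577c'))
        time.sleep(0.1)  # shortened from POLL_INTERVAL for the sketch
```

Sampling on VM state change (as proposed in the thread) would additionally trigger one final `sample_actual_sizes` call when the VM goes Down, so the engine's stored size is never stale after shutdown.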