Bug 1127607

Summary: PRD35 - [RFE] ability to get device name/serial id information from the guest
Product: Red Hat Enterprise Virtualization Manager
Reporter: Liron Aravot <laravot>
Component: ovirt-guest-agent
Assignee: Vinzenz Feenstra [evilissimo] <vfeenstr>
Status: CLOSED CURRENTRELEASE
QA Contact: Jiri Belka <jbelka>
Severity: high
Docs Contact:
Priority: urgent
Version: 3.3.0
CC: acanan, amureini, bazulay, ecohen, gklein, iheim, jshortt, juan.hernandez, juwu, laravot, linux26port, mgoldboi, michal.skrivanek, mkenneth, oramraz, ratamir, rbalakri, Rhev-m-bugs, scohen, sherold, yeylon
Target Milestone: ---
Keywords: FutureFeature
Target Release: 3.5.0
Hardware: x86_64
OS: Linux
Whiteboard: virt
Fixed In Version:
Doc Type: Enhancement
Doc Text: A new feature that allows mapping of disk images in RHEV-M to physical disks inside the guest.
Story Points: ---
Clone Of: 1063597
Environment:
Last Closed: 2015-02-17 08:27:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1063597, 1156162

Comment 1 Vinzenz Feenstra [evilissimo] 2014-08-21 08:40:43 UTC
Testing this depends on the BZ#1063597 implementation.

Once the feature is completed on the VDSM side, `vdsClient -s 0 list full` should expose the new field 'diskMapping' with the following format: 
{'$serial': {'name': '$name'}}

where $serial is the first 20 characters of the image ID, and $name on Linux is a device path such as '/dev/sda'.

Please note that in theory other storage devices can be reported as well, e.g. USB drives or SCSI pass-through devices, which of course won't reflect an image ID.

So not every device in the structure has to be an image.
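To illustrate the format described above, here is a minimal sketch (not actual VDSM code; the function name and the full image ID are hypothetical) of how a 'diskMapping' entry is shaped, with the serial taken as the first 20 characters of the image ID:

```python
def disk_mapping_entry(image_id, guest_device):
    """Build a diskMapping fragment of the form
    {'$serial': {'name': '$name'}} as described in comment 1."""
    # Only the first 20 characters of the image ID fit in the serial field.
    serial = image_id[:20]
    return {serial: {'name': guest_device}}


# Hypothetical full image ID; only its 20-char prefix appears in the mapping.
mapping = disk_mapping_entry('c42d5431-b104-47d8-b2d5-0123456789ab', '/dev/vda')
# mapping == {'c42d5431-b104-47d8-b': {'name': '/dev/vda'}}
```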

Comment 2 Jiri Belka 2014-10-24 12:34:05 UTC
ok, vdsm-4.16.7-1.el6ev.x86_64 / rhevm-guest-agent-common-1.0.10-2.el6ev.noarch

vfeenstr@'s code works: the GA is sending 'mapping' and vdsm can receive it. However, the vdsm part that consumes this data is still missing - BZ1063597.

I repurposed disksUsage to display 'mapping', just to verify this part:

--- /tmp/guestagent.py  2014-10-23 14:37:18.350107664 +0200
+++ /usr/share/vdsm/virt/guestagent.py  2014-10-23 14:38:47.232039716 +0200
@@ -303,8 +303,9 @@
                 disk['total'] = str(disk['total'])
                 disk['used'] = str(disk['used'])
                 disks.append(disk)
-            self.guestInfo['disksUsage'] = disks
+#            self.guestInfo['disksUsage'] = disks
             self.guestDiskMapping = args.get('mapping', {})
+            self.guestInfo['disksUsage'] = args.get('mapping',{})
         elif message == 'number-of-cpus':
             self.guestInfo['guestCPUCount'] = int(args['count'])
         else:

* rhel6.6

# vdsClient -s 0 getAllVmStats  | grep disksUsage
        disksUsage = {'c42d5431-b104-47d8-b': {'name': '/dev/vda'}}
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ mapping

* windows 7 64bit

# vdsClient -s 0 getAllVmStats | grep disksUsage
        disksUsage = {'631781c0-b748-4700-b': {'name': '\\\\.\\PHYSICALDRIVE0'}}
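Since the serial in the guest-reported mapping is only the 20-char prefix of the image ID, the consumer side (the missing BZ1063597 part) has to match it back to the full IDs by prefix. A minimal sketch of that matching, with hypothetical function and variable names, where non-image devices (USB, SCSI pass-through) simply match nothing and are skipped:

```python
def match_images(disk_mapping, image_ids):
    """Map full image IDs to guest device names by matching each
    image ID's 20-char prefix against the reported serials.
    Serials with no matching image (e.g. USB or pass-through
    devices) are left out."""
    by_prefix = {img_id[:20]: img_id for img_id in image_ids}
    return {
        by_prefix[serial]: info['name']
        for serial, info in disk_mapping.items()
        if serial in by_prefix
    }


# Example: one image disk and one unmatched (non-image) device.
mapping = {
    'c42d5431-b104-47d8-b': {'name': '/dev/vda'},
    'usb-serial-0001': {'name': '/dev/sdb'},   # hypothetical USB drive
}
images = ['c42d5431-b104-47d8-b2d5-0123456789ab']  # hypothetical full ID
# match_images(mapping, images) keeps only the image-backed disk.
```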

Comment 3 Jiri Belka 2014-10-29 14:01:30 UTC
not all Windows guests are happy - BZ1158501

Comment 4 Jiri Belka 2014-10-29 14:14:48 UTC
Back to VERIFIED; the failure seems to be Windows-related (above, 'hardware' was set to 'Linux') and also appears to be a delay-related issue on Windows OSes.

Comment 5 Omer Frenkel 2015-02-17 08:27:25 UTC
RHEV-M 3.5.0 has been released