Bug 1091163

Summary: 'filename' and 'file' should not be empty when using an NBD storage backend for a virtual block device
Product: Red Hat Enterprise Linux 7
Reporter: Sibiao Luo <sluo>
Component: qemu-kvm
Assignee: Hanna Czenczek <hreitz>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: low
Docs Contact:
Priority: low
Version: 7.0
CC: chayang, famz, hhuang, juzhang, kwolf, michen, mrezanin, pbonzini, qzhang, rbalakri, sluo, virt-maint, xfu
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-11-04 08:14:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Sibiao Luo 2014-04-25 05:16:58 UTC
Description of problem:
When launching a KVM guest with an NBD storage backend for a virtual block device, the QMP and HMP monitors report empty 'filename' and 'file' values for that device.

Version-Release number of selected component (if applicable):
host info:
# uname -r && rpm -q qemu-kvm-rhev
3.10.0-121.el7.x86_64
qemu-kvm-rhev-1.5.3-60.el7ev.x86_64
# rpm -q nbd
nbd-2.9.20-3.el7.x86_64
guest info:
# uname -r
3.10.0-121.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start nbd-server to export a qcow2 image (given by absolute path) on the NBD server host.
# nbd-server 12345 /home/my-data-disk.qcow2
2. Launch a KVM guest with this exported image as a data disk (the export can be inspected first with qemu-img):
# qemu-img info nbd:10.66.83.171:12345
image: 
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: unavailable
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
Guest command line (data disk portion), e.g.: ...-drive file=nbd:10.66.83.171:12345,if=none,id=drive-data-disk,format=qcow2,cache=none,aio=native -device virtio-scsi-pci,id=scsi1,addr=0x7,bus=pci.0 -device scsi-hd,bus=scsi1.0,drive=drive-data-disk,id=data-disk,bootindex=2
3. Check the block device info via QMP/HMP (see the QMP sketch after these steps):
(qemu) info block
QMP:   {"execute":"query-block"}
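For reference, one way to issue the QMP command from the host shell is through the QMP socket, e.g. with socat. This is only a sketch and assumes the guest was additionally started with "-qmp unix:/tmp/qmp.sock,server,nowait"; the socket path is made up for this example. QMP prints a greeting when the client connects and requires the capabilities handshake before it accepts any other command:

# socat - UNIX-CONNECT:/tmp/qmp.sock
{"execute":"qmp_capabilities"}
{"execute":"query-block"}

The reply to the second command contains the "inserted" object whose "file" and "filename" fields are quoted in the actual results below.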

Actual results:
After step 3,
(qemu) info block
drive-system-disk: removable=0 io-status=ok file=/home/RHEL-7.0-20140409.0_Server_x86_64.qcow2bk ro=0 drv=qcow2 encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
drive-data-disk: removable=0 io-status=ok file= ro=0 drv=qcow2 encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
...
{"execute":"query-block"}
{"return": [...{"io-status": "ok", "device": "drive-data-disk", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "image": {"virtual-size": 10737418240, "filename": "", "cluster-size": 65536, "format": "qcow2", "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "lazy-refcounts": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "file": "", "encryption_key_missing": false}, "type": "unknown"}...]}

Expected results:
'file' and 'filename' should be nbd:10.66.83.171:12345, e.g.:
"filename": "nbd:10.66.83.171:12345"
"name": "nbd:10.66.83.171:12345"

Additional info:

Comment 1 Hanna Czenczek 2014-11-03 09:33:06 UTC
Hi,

As can be seen from the status, I posted a backport, which, however, lacked some patches. Fixing this issue with another backport containing all the required patches would be rather complicated, and we are also past devel freeze.

Would it be okay with you to move this to RHEV? If not, I will move it to RHEL 7.2.

Max

Comment 2 Sibiao Luo 2014-11-04 02:28:35 UTC
(In reply to Max Reitz from comment #1)
> Hi,
> 
> As can be seen from the status, I posted a backport, which, however, lacked
> some patches. Fixing this issue with another backport containing all the
> required patches would be rather complicated, and we are also past devel freeze.
> 
> Would it be okay with you to move this to RHEV? If not, I will move it to
> RHEL 7.2.
> 
OK, no problem for me to move it to RHEV, thanks.

Comment 3 Hanna Czenczek 2014-11-04 08:14:17 UTC

*** This bug has been marked as a duplicate of bug 1135385 ***