Bug 476746 - Can't attach phy devices to RHEL 5.3 HVM x86_64 guests
Summary: Can't attach phy devices to RHEL 5.3 HVM x86_64 guests
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: xen
Version: 5.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Michal Novotny
QA Contact: Gurhan Ozen
URL:
Whiteboard:
Depends On:
Blocks: 514498 5.6-Known_Issues
 
Reported: 2008-12-16 21:36 UTC by Gurhan Ozen
Modified: 2014-02-02 22:36 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When hotplugging block devices to an HVM guest using the PV-on-HVM drivers, always attach the disks as /dev/xvd* devices. If you try to attach a disk as some device other than xvd*, the attach may fail.
Clone Of:
Environment:
Last Closed: 2010-09-23 15:08:47 UTC
Target Upstream Version:
Embargoed:



Description Gurhan Ozen 2008-12-16 21:36:19 UTC
Description of problem:

Can't attach physical devices to RHEL 5.3 x86_64 HVM guests:

On dom0:
# virsh attach-disk RH53_x86_64_hvm_guest /dev/cciss/c0d0p1 /dev/hdd --driver phy --mode shareable

On guest:
Registering block device major 22
register_blkdev: cannot get major 22 for ide
xen_blk: can't get major 22 with name ide
vbd vbd-5696: 19 xlvbd_add at /local/domain/0/backend/vbd/1/5696

On dom0:
# virsh attach-disk RH53_x86_64_hvm_guest /dev/cciss/c0d0p1 /dev/sdd --driver phy --mode shareable

On guest:
Registering block device major 8
register_blkdev: cannot get major 8 for sd
xen_blk: can't get major 8 with name sd
vbd vbd-2096: 19 xlvbd_add at /local/domain/0/backend/vbd/1/2096


In both instances xend believes the devices were legitimately attached, and when a detach command is executed, xend goes haywire: it loses the domain info even though the domain is still there, and becomes very unstable.
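The vbd numbers in the messages above encode the device the guest was asked to create: in Xen's classic numbering scheme, the virtual device number for majors below 256 is major * 256 + minor, so vbd-5696 corresponds to /dev/hdd (major 22, minor 64) and vbd-2096 to /dev/sdd (major 8, minor 48). A minimal sketch of the decoding (an illustrative helper, not part of any Xen tool):

```shell
# Decode a Xen classic-scheme virtual block device (vbd) id.
# For majors below 256, vdev = major * 256 + minor.
vbd_decode() {
    echo "major=$(( $1 / 256 )) minor=$(( $1 % 256 ))"
}

vbd_decode 5696   # /dev/hdd -> major=22 minor=64
vbd_decode 2096   # /dev/sdd -> major=8 minor=48
```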


Version-Release number of selected component (if applicable):
xen-3.0.3-79.el5

How reproducible:
Very

Steps to Reproduce:
1. see above
  
Actual results:


Expected results:


Additional info:

Comment 1 Chris Lalancette 2008-12-16 21:57:18 UTC
Actually, this is expected behavior.  The problem is that PV-on-HVM has this really messed up notion where you can attach a /dev/sd? or /dev/hd? device to a guest.  However, that's really stupid; /dev/sd is reserved for SCSI, and /dev/hd is reserved for IDE, and that is indeed the error you are seeing.  It works in PV guest domains strictly by accident, since we don't load sd or IDE there, but it really shouldn't work.

Really, you should never attach anything but a /dev/xvd? device to a PV or PV-on-HVM domain.  I think we used to have a release note or something about this, but I'm not sure if we do anymore.  I think the best we can do here is documentation.

I'm going to close this bug as "NOTABUG" for now; but if you want to use it for documentation purposes, feel free to re-open it.

Chris Lalancette

Comment 2 Gurhan Ozen 2009-01-28 19:17:00 UTC
(In reply to comment #1)
> Actually, this is expected behavior.  The problem is that PV-on-HVM has this
> really messed up notion where you can attach a /dev/sd? or /dev/hd? device to a
> guest.  However, that's really stupid; /dev/sd is reserved for SCSI, and
> /dev/hd is reserved for IDE, and that is indeed the error you are seeing.  It
> works in PV guest domains strictly by accident, since we don't load sd or IDE
> there, but it really shouldn't work.
> 
> Really, you should never attach anything but a /dev/xvd? device to a PV or
> PV-on-HVM domain.  I think we used to have a release note or something about
> this, but I'm not sure if we do anymore.  I think the best we can do here is
> documentation.
> 
> I'm going to close this bug as "NOTABUG" for now; but if you want to use it for
> documentation purposes, feel free to re-open it.
> 
> Chris Lalancette

I am reopening this bug because, despite what's noted above, customers do indeed use SCSI drivers. I think this will be a particular problem with RHEL 5 guests, because RHEL 5 HVM guests are, by default, PV-on-HVM domains, so according to the note above nothing but /dev/xvd* should be attached. This is indeed not the case, as there have been a lot of inquiries about testing SCSI emulation in guests. We should either fully support it, or prevent it from happening entirely.

Comment 3 Chris Lalancette 2009-02-02 08:12:36 UTC
Well, you have to be careful about "scsi emulation".  It is quite a complicated situation, unfortunately :(.  What kind of scsi emulation you use depends heavily on whether you are using PV-on-HVM or not.

If you are *not* using PV-on-HVM drivers in the guest, then you get true "scsi emulation"; the backend in qemu looks like a SCSI card, and you need to have the appropriate scsi driver (LSI buslogic, I think, but don't quote me) loaded in the guest.

If you are using PV-on-HVM drivers in the guest, then the situation is completely different.  In that case, the backend actually looks like a xvd device; it's just attached inside the guest using a /dev/sd *name*, which is where the conflict between having the scsi module and having the pv-on-hvm module loaded comes from.

The thing is, from my perspective, we can't prevent either use at this point; while using the pv-on-hvm drivers to drive scsi is a stupid design decision in Xen, it's worked (for some definition of worked) since RHEL 5.0 now, not to mention compatibility with other Xen implementations.

So I'm not sure what the solution would be; we can't remove the (mis-)feature, but I'm also not sure how we would make it work better either.

In any case, I'll leave this open for now.

Chris Lalancette

Comment 6 Michal Novotny 2010-06-28 16:03:38 UTC
I did try to reproduce this, but I was only seeing:

# virsh attach-disk rhel5-64fv /dev/sda1 /dev/sda --driver phy --mode shareable
error: internal error Invalid harddisk device name: /dev/sda1

This should connect /dev/sda1 as a raw device to the guest as /dev/sda, but the error message is printed instead. Is this a bug, or am I doing something wrong? rhel5-64fv is an x86_64 RHEL-5.3 guest, and /dev/sda1 was the physical partition to be passed to the guest directly.

But when I tried using xm instead, the attach went through, yet the device was not visible in the guest:

# xm block-attach rhel5-64fv /dev/sda1 sda r
# xm block-list rhel5-64fv
Vdev  BE handle state evt-ch ring-ref BE-path
768    0    0     6      4      8     /local/domain/0/backend/vbd/6/768  
2048    0    0     3      7      1284  /local/domain/0/backend/vbd/6/2048  
# xenstore-ls /local/domain/0/backend/vbd/6/2048
domain = "rhel5-64fv"
frontend = "/local/domain/6/device/vbd/2048"
format = "raw"
dev = "sda"
state = "2"
params = ""
mode = "r"
online = "1"
frontend-id = "6"
type = ""
hotplug-status = "connected"
# 

domU# ls /dev/sd*
ls: /dev/sd*: No such file or directory
# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 5.3 (Tikanga)

So I was not seeing anything like:
Registering block device major 8
register_blkdev: cannot get major 8 for sd
xen_blk: can't get major 8 with name sd
vbd vbd-2096: 19 xlvbd_add at /local/domain/0/backend/vbd/1/2096

as described in comment 0.

Am I doing something wrong? The device should be there, right?

Thanks,
Michal

Comment 7 Michal Novotny 2010-07-13 16:17:23 UTC
Well, for an HVM guest (RHEL-5.3 x86_64) I did try attaching the LVM volume, and the attach went through even though the /dev/sda device was not seen at all. (I then discovered I had been using the PV-on-HVM drivers, so I disabled them via the modprobe.d/blacklist file and rebooted the guest.)

# xm block-attach rhel5-64fv phy:/dev/mapper/testVolume-test1G sda r
...
# xm console rhel5-64fv
...
Registering block device major 3
register_blkdev: cannot get major 3 for ide
xen_blk: can't get major 3 with name ide
vbd vbd-768: 19 xlvbd_add at /local/domain/0/backend/vbd/5/768
Registering block device major 8
register_blkdev: cannot get major 8 for sd
xen_blk: can't get major 8 with name sd
vbd vbd-2048: 19 xlvbd_add at /local/domain/0/backend/vbd/5/2048
netfront: Initialising virtual ethernet driver.
netfront: device eth1 has copying receive path
...

But when I was not using the PV-on-HVM drivers (it took me some time to find the workaround of blacklisting them in the guest, since they were not present in the config file but were still being loaded in the guest), I was getting this in dmesg:

...
input: PC Speaker as /class/input/input2
  Vendor: QEMU      Model: QEMU HARDDISK     Rev: 0.8.
  Type:   Direct-Access                      ANSI SCSI revision: 03
 target0:0:0: tagged command queuing enabled, command queue depth 16.
 target0:0:0: Beginning Domain Validation
 target0:0:0: Domain Validation skipping write tests
 target0:0:0: Ending Domain Validation
SCSI device sda: 2097152 512-byte hdwr sectors (1074 MB)
sda: Write Protect is on
sda: Mode Sense: 13 00 80 00
SCSI device sda: drive cache: write back
SCSI device sda: 2097152 512-byte hdwr sectors (1074 MB)
sda: Write Protect is on
sda: Mode Sense: 13 00 80 00
SCSI device sda: drive cache: write back
 sda: sda1
sd 0:0:0:0: Attached scsi disk sda
...

And this is using the emulated LSI SCSI controller (specifically the LSI53C895A) and its drivers. As Chris wrote, the situation is different when using the PV-on-HVM drivers, and we cannot know whether the guest will be using the PV-on-HVM drivers or not. (I no longer had the PV driver setup in the domain configuration file, but the guest remembered that I had used that setup some time ago, so the xen drivers were still loaded into the guest; I had to blacklist them in order to use the native drivers instead, i.e. the Realtek 8139 NIC and LSI SCSI drivers.) Maybe those drivers could write some information about their presence to the xenstore, which would let the tools (xm/virsh) know about them and disallow running the block-attach command this way, but I'm not sure about this.
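For reference, the blacklisting workaround mentioned above amounts to entries along these lines in the guest's /etc/modprobe.d/blacklist file. The module names below are assumptions based on the typical RHEL 5 PV-on-HVM driver set and may differ on a given system; check `lsmod` in the guest for the actual PV driver names:

```
# Hypothetical example: keep the PV-on-HVM drivers from loading
# so the guest falls back to the emulated (native) devices.
blacklist xen-vbd
blacklist xen-vnif
```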

My guest configuration contained:

    (device
        (vbd
            (backend 0)
            (dev sda:disk)
            (uname phy:/dev/mapper/testVolume-test1G)
            (mode r)
        )
    )

when running `xm li -l rhel5-64fv`, so the device was presented as a physical volume (an LVM-backed volume in this case) and the data on the volume was accessible when mounted, which is the expected behaviour.

Also, I was looking for a KBase article about this and couldn't find any, so I'm editing the technical note field here according to my observations and testing. Please correct me if I'm wrong.

Thanks,
Michal

Comment 8 Michal Novotny 2010-07-13 16:17:23 UTC
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
If you're using the PV-on-HVM drivers in your HVM guests you cannot attach the SCSI block device since it's being emulated. Only adding the block devices to the emulated disks (i.e. using the native drivers) is supported.

Comment 9 Michal Novotny 2010-07-13 16:19:31 UTC
Technical note updated. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1 +1 @@
-If you're using the PV-on-HVM drivers in your HVM guests you cannot attach the SCSI block device since it's being emulated. Only adding the block devices to the emulated disks (i.e. using the native drivers) is supported.+If you're using the PV-on-HVM drivers in your HVM guests you cannot attach the SCSI block devices since it's using the PV drivers. Only adding the block devices to the emulated disks  (i.e. using the native drivers) for HVM guest is supported.

Comment 10 Chris Lalancette 2010-07-13 19:19:38 UTC
Hm, I think that release note is confusing.  It almost seems to read that you can't do block attach while the PV-on-HVM drivers are running, which is clearly not the case.  Instead, I think we should say what the user should do, instead of what they shouldn't do.  Something like:

When trying to hotplug block devices to an HVM guest using the PV-on-HVM drivers, always attach the disks as /dev/xvd* devices.  The use of other paths for the disks may cause the disk attach to fail.

Chris Lalancette

Comment 11 Michal Novotny 2010-07-14 11:50:51 UTC
(In reply to comment #10)
> Hm, I think that release note is confusing.  It almost seems to read that you
> can't do block attach while the PV-on-HVM drivers are running, which is clearly
> not the case.  Instead, I think we should say what the user should do, instead
> of what they shouldn't do.  Something like:
> 
> When trying to hotplug block devices to an HVM guest using the PV-on-HVM
> drivers, always attach the disks as /dev/xvd* devices.  The use of other paths
> for the disks may cause the disk attach to fail.
> 
> Chris Lalancette    

Well, thanks Chris. I got confused myself, hence that comment. I did try attaching the physical device as /dev/xvdb instead of /dev/sda, and according to my testing with the `xm block-attach rhel5-64fv phy:/dev/mapper/testVolume-test1G xvdb r` command it added the device, and dmesg showed the following lines relevant to adding this device:
...
Registering block device major 202
 xvdb: xvdb1
...

When I mounted the drive I was able to access the data. Detaching and reattaching it as a read-write disk also worked fine, so this is the expected behavior. I'm changing the technical note now according to this testing, which is just confirmation that it works exactly as Chris wrote.
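The major 202 seen in the log above is the standard Linux major for Xen xvd devices, with 16 minors per disk: xvdb is 202:16 and its first partition xvdb1 is 202:17. A small illustrative helper (not part of any Xen tool) showing the mapping:

```shell
# Compute the minor number for an xvd device (major 202,
# 16 minors per disk): /dev/xvd<letter><partition>.
xvd_minor() {
    local disk=$1 part=${2:-0}
    local idx=$(( $(printf '%d' "'$disk") - 97 ))   # a=0, b=1, ...
    echo $(( idx * 16 + part ))
}

xvd_minor b     # xvdb  -> 16
xvd_minor b 1   # xvdb1 -> 17
```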

Thanks Chris,
Michal

Comment 12 Michal Novotny 2010-07-14 11:50:51 UTC
Technical note updated. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1 +1 @@
-If you're using the PV-on-HVM drivers in your HVM guests you cannot attach the SCSI block devices since it's using the PV drivers. Only adding the block devices to the emulated disks  (i.e. using the native drivers) for HVM guest is supported.+When trying to hotplug block devices to an HVM guest using the PV-on-HVM drivers, always attach the disks as /dev/xvd* devices. When you try to attach as some other device then xvd*, the disk attach may fail.

Comment 13 Michal Novotny 2010-07-21 07:38:20 UTC
After proofreading by Engineering Content Services, feel free to close this.

Michal

Comment 15 Ryan Lerch 2011-01-05 04:37:58 UTC
Technical note updated. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1 +1 @@
-When trying to hotplug block devices to an HVM guest using the PV-on-HVM drivers, always attach the disks as /dev/xvd* devices. When you try to attach as some other device then xvd*, the disk attach may fail.+When hotplugging block devices to an HVM guest using the PV-on-HVM drivers, always attach the disks as /dev/xvd* devices. When you try to attach as some other device then xvd*, the disk attach may fail.

