Bug 1269577
| Field | Value |
|---|---|
| Summary | [RFE] Support more than six Virtio SCSI disks on a single bus controller |
| Product | Red Hat OpenStack |
| Component | openstack-nova |
| Version | 7.0 (Kilo) |
| Target Release | 13.0 (Queens) |
| Target Milestone | Upstream M1 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | medium |
| Keywords | FutureFeature, TestOnly, Triaged |
| Hardware | All |
| OS | Linux |
| Reporter | Christian Horn <chorn> |
| Assignee | Sahid Ferdjaoui <sferdjao> |
| QA Contact | Joe H. Rahme <jhakimra> |
| CC | acanan, aguetta, berrange, chorn, dasmith, dgurtner, dyuan, eglynn, fpercoco, jdenemar, jhakimra, jtomko, kchamart, lyarwood, mtessun, nagata3333333, panbalag, rbalakri, sbauza, sclewis, scohen, sferdjao, sgordon, srevivo, vromanso, xuzhang |
| Fixed In Version | openstack-nova-17.0.0-0.20180103233857.0010e23.2.el7ost |
| Doc Type | Enhancement |
| Type | Bug |
| Last Closed | 2018-06-27 13:26:22 UTC |
| Bug Blocks | 1203710, 1442136 |
| Attachments | create-scsi-disks.bash |
Description
Christian Horn
2015-10-07 15:12:37 UTC
Additional info:

- It seems that the simplest SCSI standard limits a bus to 8 devices. 16 seems to be possible, but it's unclear how to configure that in libvirt (and we eventually need this exposed in OpenStack, of course).
- On https://access.redhat.com/articles/1436373 we state that we support a certain "Number of mounted volumes per host". How is this realized?
- A para-virtualized storage driver which allows the required number of devices, with virtio-scsi-comparable performance, might also fulfill the request.

The application attaching the disk can specify the address if the defaults chosen by libvirt are not good enough, e.g.:

```xml
<address type='drive' controller='0' bus='0' target='0' unit='25'/>
```

This way more than 7 disks can be attached to a single virtio-scsi controller. See http://www.redhat.com/archives/libvir-list/2013-November/msg01114.html

Auto-adding of controllers is a legacy "feature" and we should not be adding attributes to it. To have full control over the controller added, the application should add it separately. I don't think there's anything more libvirt can do about this.

Is this available via virsh or only via the libvirt API? I did not see a way to specify the unit number in "virsh attach-disk". Without specifying it (as also mentioned by you), the "unit" is just counted up to 7 and further disks are not attached. Also, when doing "virsh edit rhel7u1a" and adding the following disk, the disk does not appear in the started KVM guest:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/rhel7u1a.qcow2'/>
  <target dev='sdz' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='25'/>
</disk>
```

Disks with "sda" and "unit='1'" etc. do appear. Fedora 23 as host.

Sorry for the noise; I understand from bz1173144 that attach-device is to be used. Investigating whether this gives us enough to make the functionality available in OpenStack, so that the new disks can be attached from the outside.
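The attach-device flow discussed above can be sketched as follows. This is a hedged sketch: the guest name (`rhel7u1a`), image path, and unit number follow the examples in the thread, and the actual `virsh attach-device` call is left commented out because it requires a defined, running guest on a real libvirt host.

```shell
# Write a disk XML that pins the device to SCSI controller 0 at an
# explicit unit, which is what allows going past the auto-assigned
# units 0-6 on a single virtio-scsi controller.
cat > disk-sdz.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/rhel7u1a.qcow2'/>
  <target dev='sdz' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='25'/>
</disk>
EOF
# On a real host, hot-plug it with:
#   virsh attach-device rhel7u1a disk-sdz.xml --live
grep "unit=" disk-sdz.xml
```

The key point is the explicit `<address>` sub-element; without it, libvirt stops auto-assigning units after 7 devices on the controller.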
From what I see, one has to call attach-device with a custom SCSI device identifier (sdb, sdc or such) and increment the unit number. We will have to think over whether this housekeeping should be done in glance (transparent to whoever calls glance), or whether we have to make the info available to the outside and call glance with these parameters. Having it internally in libvirt would have taken that burden off us, but let's see.

Hello openstack-glance team, the overall goal of the request is in the description of this bz. We have established that libvirt provides the capability of attaching more than 8 disks, using "attach-device". Can the functionality be made available via glance?

It's unclear to me what the exact request from a glance point of view is. Glance has no knowledge of the hypervisor in use or of what nova can/cannot do with it. It's possible to assign properties to Glance images, but those are dynamic and they are consumed by Nova (or any other consumers of images). Perhaps this should be moved to openstack-nova?

Hi, could we get an update on this, please?

(In reply to Christian Horn from comment #9)
> Hi, could we get an update on this, please?

Hi Christian,

From a libvirt perspective, this needs more testing with the below upstream libvirt fix (thanks to Daniel Berrangé for the pointer), which is part of the just-released (09DEC2015) libvirt-1.3.0:

http://libvirt.org/git/?p=libvirt.git;a=commit;h=105794c -- qemu: Automatic SCSI controller creation in SCSI disk hotplug broken

Thank you, Kashyap. So using this patch, what would be the approach at the libvirt level? One would always use the same command to attach devices, and libvirt would then, if required, create new SCSI controllers by itself? Can you give an example of how to correctly trigger that attach operation using virsh? With this I would then try the functionality (compiling 1.3 on Fedora, or waiting until Fedora updates to the 1.3 codebase).
Would then need to request the code into RHEL 7 (via a rebase or a port), and then consider whether we need to change something in glance to trigger this via the OpenStack API.

Hello, any updates to go forward here?

Just to summarize: the current consensus from the libvirt developers is that to have more than six SCSI drives on a single bus, one has to manually specify the granular details (like 'unit' in the <address> sub-element for each disk added).

Using 'attach-device' the below way -- manually specifying the 'unit' attribute (and incrementing it for each disk) of the 'address' sub-element, which describes where the device is placed on the virtual bus presented to the guest -- works if you want more than 6 disks attached to a single controller.

I have attached a simple script (create-scsi-disks.bash) that creates 10 raw disk images and constructs a drive XML that attaches all 10 disks to a single SCSI controller ("0"). Tested with:

- libvirt-daemon-kvm-1.3.5-2.fc24.x86_64
- qemu-system-x86-2.6.0-3.fc24.x86_64

And then you can see all the 11 disks attached to the *same* controller (controller='0'):

```
$ sudo virsh dumpxml cvm1 | egrep -i 'target|controller.*scsi'
<target dev='vda' bus='virtio'/>
<target dev='sd' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
<target dev='sdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
<target dev='sdc' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='3'/>
<target dev='sdd' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='4'/>
<target dev='sde' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='5'/>
<target dev='sdf' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='6'/>
<target dev='sdg' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='7'/>
<target dev='sdh' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='8'/>
<target dev='sdi' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='9'/>
<target dev='sdj' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='10'/>
<target dev='sdk' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='11'/>
[...]
```

Created attachment 1170826 [details]
Script to create 10 SCSI disks to a single SCSI controller
Created attachment 1171901 [details]
Script to create 10 SCSI disks to a single SCSI controller
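The attached create-scsi-disks.bash itself is not reproduced in this bug, so the following is only a hedged sketch of what such a script likely does, based on the dumpxml output above: create 10 disk images and a per-disk XML fragment, each addressed at an explicit unit on controller 0. The guest name `cvm1` comes from the thread; the output directory and image sizes are illustrative, and the `qemu-img`/`virsh` calls are commented out since they need a real host.

```shell
#!/bin/bash
# Sketch: build 10 drive XML fragments (sdb..sdk), each pinned to
# virtio-scsi controller 0 at units 1..10.
set -e
outdir=$(mktemp -d)
letters=bcdefghijk                      # target devs sdb .. sdk
for i in $(seq 1 10); do
  dev="sd${letters:$((i-1)):1}"
  img="$outdir/disk$i.img"
  # qemu-img create -f raw "$img" 1G            # uncomment on a real host
  cat > "$outdir/disk$i.xml" <<EOF
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='$img'/>
  <target dev='$dev' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='$i'/>
</disk>
EOF
  # virsh attach-device cvm1 "$outdir/disk$i.xml" --live
done
echo "wrote XML fragments to $outdir"
```

Because every fragment names controller='0' explicitly, all 10 disks land on the same controller instead of libvirt stopping after unit 7.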
Kashyap, what are the next steps here for Nova?

[Sigh, my browser crashed the other day, and my comment didn't make it through.]

Steve,

Will work with Vladik to resolve this. The current thinking is to find a way to store the SCSI controller ID in the instance, perhaps in 'system_metadata' (in discussion with Vladik). Once done, we could attach more 'virtio-scsi' devices to this specific controller.

(In reply to Kashyap Chamarthy from comment #27)
> Will work with Vladik to resolve this.
>
> The current thinking is to find a way to store the SCSI controller ID in
> the instance, perhaps in 'system_metadata' (in discussion with Vladik).
> Once done, we could attach more 'virtio-scsi' devices to this specific
> controller.

I'm assuming at this point that has to be a Pike specification?

(In reply to Stephen Gordon from comment #28)
> I'm assuming at this point that has to be a Pike specification?

Yes, you're right.

Hi Kashyap, is there a draft specification up for this? Thanks, Steve

I probably missed something, but I do not see a feature here; at most there is a feature request to let users select the controller used by a device, but first of all we have a bug to fix. After discussing with Vladik, we conclude that the root cause is in Nova: when using a virtio-scsi controller, we should specify the index of the controller which should be used by the device.

A virtio-scsi controller is able to handle 256 devices, so for a guest using the virtio-scsi model only one controller is necessary. I think we should rename this issue to something like: "libvirt should allow more than 6 devices attached to a virtio-scsi controller".

Patches pushed upstream: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1686116

Moving to Queens; while the code may still land upstream in Pike, it is not clear we will have time to complete verification.

The code is merged upstream: https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bug/1686116

Christian, can you confirm which version of RHOSP is now in use at the relevant customer sites?

(In reply to Stephen Gordon from comment #43)
> Christian, can you confirm which version of RHOSP is now in use at the
> relevant customer sites?

Based on #45, I think that no porting to earlier releases should be done based on this bz; the functionality will eventually come into RHOSP with a new major version.

Given that the code for this landed in Pike, this would become TestOnly if verification falls to later releases.

The fix has been merged upstream for Pike and backported to stable/ocata [0]. I'm updating the target release to Pike and setting the status to MODIFIED.

[0] https://review.openstack.org/#/q/status:merged+project:openstack/nova+topic:bug/1686116

For future time travelers: there was a check somewhere else that still limited all buses to 26 devices. Refer to https://bugzilla.redhat.com/show_bug.cgi?id=1583553

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086
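The capacity argument in the thread (one virtio-scsi controller can address 256 devices, so a single controller suffices) implies only simple unit bookkeeping is needed. This is a hedged sketch of that bookkeeping, not the logic of the merged Nova patches: pick the lowest free unit on controller 0 given the units already in use.

```shell
# Sketch: print the lowest free unit (0-255) on a single virtio-scsi
# controller, given the units already taken as arguments.
next_unit() {
  for u in $(seq 0 255); do
    case " $* " in
      *" $u "*) ;;            # this unit is already taken
      *) echo "$u"; return 0 ;;
    esac
  done
  return 1                     # all 256 units taken: controller is full
}

next_unit 0 1 2 3 4 5 6        # prints 7: disk #8 needs no extra controller
```

Since the address space runs to 255, the old "six disks per controller" ceiling was purely a limit of the auto-assignment logic, not of virtio-scsi itself.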