Bug 1252437
| Summary: | [Discovery] Gathers wrong information about disks available | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Dariusz Smigiel <dariusz.smigiel> |
| Component: | openstack-ironic-discoverd | Assignee: | Dmitry Tantsur <dtantsur> |
| Status: | CLOSED ERRATA | QA Contact: | Toure Dunnon <tdunnon> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.0 (Kilo) | CC: | apevec, calfonso, dariusz.smigiel, dmacpher, ealcaniz, lhh, mburns, mcornea, rhel-osp-director-maint |
| Target Milestone: | y1 | Keywords: | Triaged, ZStream |
| Target Release: | 7.0 (Kilo) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | openstack-ironic-discoverd-1.1.0-6.el7ost | Doc Type: | Bug Fix |
| Doc Text: | The inspection process picked a random root disk to report as local_gb. This often returned the wrong local_gb value, which would differ from run to run on machines with multiple hard disks. This fix sorts the order of the disks before picking the first one. The inspection process now provides a consistent local_gb value. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-10-08 12:16:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Dariusz Smigiel
2015-08-11 12:14:00 UTC

Hi! What's the way to distinguish between "correct" and "incorrect" disks in your case? Currently discoverd just takes the first available disk; we need some hints if we have to change this.

We've talked on IRC a bit, and I think for RHOS 7 we should just sort the disk list so that /dev/sda goes first. For RHOS 8 we need to support Ironic root device hints in inspector. Blueprint: https://blueprints.launchpad.net/ironic-inspector/+spec/root-device-hints

In a virtual environment, a node with 4 additional disks reports u'local_gb': u'9':
```
[root@overcloud-controller-2 ~]# fdisk -l | grep Disk | grep sd..
Disk /dev/sda: 44.0 GB, 44023414784 bytes, 85983232 sectors
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 10.7 GB, 10737418240 bytes, 20971520 sectors

[stack@instack ~]$ ironic node-show 534f9bd7-3f4b-4838-934f-488bcd9cb6dc | grep local_gb
| properties | {u'memory_mb': u'10240', u'cpu_arch': u'x86_64', u'local_gb': u'9', |
```
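The reported values are consistent with the inspection ramdisk rounding the disk size down to whole GiB and subtracting one GiB as a safety margin. This is an inference from the numbers above, not a quote of the discoverd source, but it fits both nodes: one of the 10737418240-byte disks yields 9, and the 44023414784-byte /dev/sda yields 40.

```python
def local_gb(size_bytes):
    """Hypothetical reconstruction: disk size in whole GiB minus a
    1 GiB safety margin, matching the fdisk byte counts above."""
    return size_bytes // 2**30 - 1

# A 10 GB disk (sdd/sde above) explains the reported u'local_gb': u'9'
print(local_gb(10737418240))  # -> 9
# /dev/sda at 44023414784 bytes explains the 40 seen on the other node
print(local_gb(44023414784))  # -> 40
```

So a node whose disks differ in size reports a different local_gb depending on which disk the ramdisk happened to enumerate first.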
```
[root@overcloud-compute-0 heat-admin]# fdisk -l | grep Disk | grep sd..
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 44.0 GB, 44023414784 bytes, 85983232 sectors

[stack@instack ~]$ ironic node-show 5109dc04-4592-4248-8bf5-d585ec16e7f2 | grep local_gb
| properties | {u'memory_mb': u'16384', u'cpu_arch': u'x86_64', u'local_gb': u'40', |
```
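The fix described in the Doc Text (sort the disks before taking the first one) can be sketched as follows. The `disks` list and its dict layout are illustrative, not the actual discoverd data structure:

```python
def pick_root_disk(disks):
    """Pick a deterministic root disk: sort by device name so that
    /dev/sda wins regardless of the order the ramdisk enumerated the
    disks in."""
    return sorted(disks, key=lambda d: d['name'])[0]

# Enumeration order varies between runs (sdc, sdb, sda, ...), but the
# sorted pick is stable:
disks = [
    {'name': '/dev/sdc', 'size': 10737418240},
    {'name': '/dev/sdb', 'size': 21474836480},
    {'name': '/dev/sda', 'size': 44023414784},
]
print(pick_root_disk(disks)['name'])  # -> /dev/sda
```

This only makes the choice consistent for RHOS 7; choosing the *intended* root disk is what the root-device-hints blueprint referenced above addresses.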
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2015:1862