Bug 1270869
| Summary: | RHOS does not properly detect VirtIO storage | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Jason Montleon <jmontleo> |
| Component: | openstack-ironic-discoverd | Assignee: | Dmitry Tantsur <dtantsur> |
| Status: | CLOSED WONTFIX | QA Contact: | yeylon <yeylon> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.0 (Kilo) | CC: | apevec, lhh, lmartins, mburns, ochalups, rhel-osp-director-maint, srevivo, yeylon |
| Target Milestone: | --- | | |
| Target Release: | 7.0 (Kilo) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: | Cause: the introspection ramdisk assumes a SATA disk type and does not detect VirtIO disks. Consequence: VirtIO disks are not detected during introspection. Workaround (if any): temporarily switch VirtIO disks to SATA before introspection. Result: introspection succeeds, and the disks can be switched back to VirtIO before deployment. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-02-09 14:17:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1273561 | | |
Description
Jason Montleon
2015-10-12 14:50:54 UTC
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/chap-Requirements.html#sect-Environment_Requirements "It is recommended to use bare metal systems for all nodes. At minimum, the Compute nodes require bare metal systems."

Comment
Dmitry Tantsur

Hi! This is a known issue for OSPd7. The recommended workaround is to use SATA emulation for introspection, then switch the nodes back to VirtIO. This issue does not affect OSPd8. I'm sorry, but fixing the introspection ramdisk for OSPd7 is too risky at this point, given that an easy workaround exists for this issue. Please let us know if you encounter similar problems with OSPd8.
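For libvirt-backed virtual nodes, the bus switch described above can be scripted rather than edited by hand. The sketch below is illustrative only and is not part of this bug report: it assumes the nodes are libvirt guests, and the script name `switch_disk_bus.py` and its helper are hypothetical. It rewrites each disk's `<target>` element in the domain XML between `bus="virtio"` and `bus="sata"`, renames the device node to match, and drops the disk's `<address>` element so libvirt can assign one appropriate to the new bus.

```python
#!/usr/bin/env python
# Hedged sketch (not from the bug report): flip a libvirt guest's disk bus
# between virtio and sata so the OSPd7 introspection ramdisk can see the
# disks, then flip it back before deployment.
#
# Assumed round trip:
#   virsh dumpxml overcloud-node1 > node1.xml
#   python switch_disk_bus.py node1.xml sata > node1-sata.xml
#   virsh define node1-sata.xml
#   ... run introspection, then repeat with "virtio" before deploying ...
import sys
import xml.etree.ElementTree as ET

# Device-name prefixes conventionally paired with each bus type.
PREFIX = {"virtio": "vd", "sata": "sd"}

def switch_disk_bus(xml_path, new_bus):
    """Return domain XML with every virtio/sata disk moved to new_bus."""
    tree = ET.parse(xml_path)
    for disk in tree.findall("./devices/disk"):
        target = disk.find("target")
        if target is None:
            continue
        old_bus = target.get("bus")
        if old_bus == new_bus or old_bus not in PREFIX:
            continue  # leave IDE/SCSI/etc. disks untouched
        dev = target.get("dev", "")  # e.g. "vda"
        target.set("bus", new_bus)
        # Rename vdX <-> sdX so the device name matches the new bus.
        target.set("dev", PREFIX[new_bus] + dev[len(PREFIX[old_bus]):])
        # Drop the stale <address> so libvirt assigns one for the new bus.
        address = disk.find("address")
        if address is not None:
            disk.remove(address)
    return ET.tostring(tree.getroot(), encoding="unicode")

if __name__ == "__main__":
    print(switch_disk_bus(sys.argv[1], sys.argv[2]))
```

Running the same script with `virtio` as the second argument after introspection restores the original configuration before deployment, matching the workaround in the Doc Text above.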