Bug 692190
| Summary: | error while building simple f13 template, libvirt_xml = self.guest.install(self.app_config["timeout"]) | | |
|---|---|---|---|
| Product: | [Retired] CloudForms Cloud Engine | Reporter: | wes hayutin <whayutin> |
| Component: | aeolus-conductor | Assignee: | Chris Lalancette <clalance> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | wes hayutin <whayutin> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 0.3.1 | CC: | akarol, dajohnso, deltacloud-maint, dgao, morazi, ssachdev |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| URL: | http://dhcp231-29.rdu.redhat.com/conductor/image_factory/templates/1?details_tab=builds | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 684278 | | |
Description
wes hayutin
2011-03-30 16:21:50 UTC
recreate:
1. install
2. create provider account
3. create f13 template w/o packages
4. build

I believe this is slated to be addressed with a newer rev of Oz than we are currently carrying, but would like Chris to confirm.

I haven't done anything in particular to address this one. While the error message is a little opaque, it actually shows what the problem is: the host that this was run on does not support HVM guests (i.e. KVM). If this is a machine that doesn't support KVM at all, then it more or less can't be used to do builds. If KVM is disabled in the BIOS, then it needs to be enabled.

Chris Lalancette

```
[root@dhcp231-29 ~]# cat /proc/cpuinfo | grep vmx
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow vnmi flexpriority
(the same flags line is repeated for each of the four CPUs)
[root@dhcp231-29 ~]#
```

While it's possible that it's turned off in the BIOS... very true.
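An aside on the cpuinfo check above: grepping for `vmx` only covers Intel VT-x (AMD hosts advertise `svm` instead), and the flag can show up in `/proc/cpuinfo` even when the BIOS has virtualization locked off, so a loadable kvm module and a present `/dev/kvm` are the more decisive test. A minimal sketch of the flag check; the `has_virt_flag` helper is hypothetical and is run here against a canned flags line so it executes anywhere:

```shell
# Hypothetical helper: does a /proc/cpuinfo "flags" line advertise
# hardware virtualization (vmx = Intel VT-x, svm = AMD-V)?
has_virt_flag() {
  case " $1 " in
    *" vmx "*|*" svm "*) echo yes ;;
    *)                   echo no  ;;
  esac
}

# Canned flags line (abridged from the cpuinfo output above):
flags="fpu vme de pse vmx est tm2 ssse3 cx16"
has_virt_flag "$flags"   # prints "yes"

# On a live host one would feed it the real thing:
#   has_virt_flag "$(grep -m1 '^flags' /proc/cpuinfo)"
```

Even when this prints "yes", the host can still fail to run KVM guests, which is exactly the situation this bug describes.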
Going to try and install a virt guest by hand, which I have done several times on this hardware. It's possible some kind of config has changed on this hardware since then. Thanks for looking into this bug.

```
[root@dhcp231-29 ~]# virt-install -n auto-rhq -r 2046 -f /var/lib/libvirt/images/auto-rhq.img -s 30 --vnc -l http://vault.rhndev.redhat.com/engarchive2/released/F-12/GOLD/Fedora/i386/os/ -x "noipv6 ks=http://whayutin.rdu.redhat.com/ks/ks.cfg" --bridge=virbr0
```

Fails with:

```
Mar 30 19:37:37 localhost libvirtd: 19:37:37.912: error : qemudDomainLookupByName:4403 : Domain not found: no domain with matching name 'auto-rhq'
Mar 30 19:37:37 localhost libvirtd: 19:37:37.918: error : storageVolumeLookupByPath:1241 : Storage volume not found: no storage vol with matching path
Mar 30 19:37:37 localhost libvirtd: 19:37:37.986: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed
Mar 30 19:37:37 localhost libvirtd: 19:37:37.989: error : storageVolumeLookupByPath:1241 : Storage volume not found: no storage vol with matching path
Mar 30 19:37:37 localhost libvirtd: 19:37:37.992: error : storageVolumeLookupByPath:1241 : Storage volume not found: no storage vol with matching path
Mar 30 19:37:37 localhost libvirtd: 19:37:37.995: error : storageVolumeLookupByName:1151 : Storage volume not found: no storage vol with matching name 'auto-rhq.img'
Mar 30 19:37:38 localhost kernel: [16030.036493] [IPTABLES] INPUT : IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:23:ae:79:43:4f:08:00 SRC=10.11.228.35 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=17500 DPT=17500 LEN=131
Mar 30 19:37:58 localhost kernel: [16050.467380] [IPTABLES] INPUT : IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:a4:ba:db:87:98:ea:08:00 SRC=0.0.0.0 DST=255.255.255.255 LEN=300 TOS=0x00 PREC=0x00 TTL=128 ID=50474 PROTO=UDP SPT=68 DPT=67 LEN=280
```

So Chris is correct... something is wrong. I'm not convinced it's hardware yet though.
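Incidentally, a quick way to triage a burst of libvirtd errors like the one above is to tally the error sites, so the recurring failure (here the storage-volume lookups) stands out from the one-offs. A throwaway sketch; the excerpt is abridged from the log above and the pipeline itself is generic:

```shell
# Tally distinct libvirtd error sites in a log excerpt.
log="Mar 30 19:37:37 localhost libvirtd: 19:37:37.912: error : qemudDomainLookupByName:4403 : Domain not found
Mar 30 19:37:37 localhost libvirtd: 19:37:37.918: error : storageVolumeLookupByPath:1241 : Storage volume not found
Mar 30 19:37:37 localhost libvirtd: 19:37:37.989: error : storageVolumeLookupByPath:1241 : Storage volume not found
Mar 30 19:37:37 localhost libvirtd: 19:37:37.992: error : storageVolumeLookupByPath:1241 : Storage volume not found
Mar 30 19:37:37 localhost libvirtd: 19:37:37.995: error : storageVolumeLookupByName:1151 : Storage volume not found"

# Extract the function name after "error : ", then count occurrences,
# most frequent first.
printf '%s\n' "$log" | sed -n 's/.*error : \([A-Za-z]*\):.*/\1/p' | sort | uniq -c | sort -rn
```

On this excerpt the storageVolumeLookupByPath failures dominate, which points at the storage pool rather than the domain definition.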
I've confirmed virt is enabled in the BIOS and libvirt was installed. Some other things I noticed:

1. I was able to start virt-install after rebooting using:

```
virt-install -n auto-rhq -r 2046 -f /var/lib/libvirt/images/auto-rhq.img -s 30 --vnc -l http://vault.rhndev.redhat.com/engarchive2/released/F-12/GOLD/Fedora/i386/os/ -x "noipv6 ks=http://whayutin.rdu.redhat.com/ks/ks.cfg" --bridge=virbr0
```

   The install would hang when trying to pull something from the network. Something must be wrong with the default network interface virbr0. On some hardware we create our own network bridge, and I wonder if there is an issue there.

2. virt was enabled in the BIOS.

So with virt enabled in the BIOS, a CPU with vmx flags, and libvirt installed, that was still not enough to successfully create a guest. Not sure why yet; that should be enough.

OK, Chris found the root cause here: https://bugzilla.redhat.com/show_bug.cgi?id=692558

*** Bug 692747 has been marked as a duplicate of this bug. ***

```
[root@ibm-x3950m2-01 noarch]# cat /var/log/aeolus-connector.log | grep -i complete
D, [2011-06-23T13:56:58.581322 #10459] DEBUG -- : GOT AN EVENT: redhat.com:imagefactory:848fa44e-3eaf-4821-be1e-a351c2ddf8c1, percent_complete100eventPERCENTAGEaddr_object_namebuild_adaptor:build_image:8809de31-c008-457b-8578-f743948eac2c_agent_nameredhat.com:imagefactory:848fa44e-3eaf-4821-be1e-a351c2ddf8c1 at 1308851818579773033
D, [2011-06-23T13:56:58.591694 #10459] DEBUG -- : GOT AN EVENT: redhat.com:imagefactory:848fa44e-3eaf-4821-be1e-a351c2ddf8c1, old_statusNEWeventSTATUSnew_statusCOMPLETEDaddr_object_namebuild_adaptor:build_image:8809de31-c008-457b-8578-f743948eac2c_agent_nameredhat.com:imagefactory:848fa44e-3eaf-4821-be1e-a351c2ddf8c1 at 1308851818585917097
D, [2011-06-23T13:56:58.632069 #10459] DEBUG -- : Data: #<FactoryRestHandler::EventData:0x7f2f6ebee818 @uuid="8809de31-c008-457b-8578-f743948eac2c", @obj="build_image", @value="completed", @event="STATUS">
D, [2011-06-23T13:57:44.712984 #10459] DEBUG -- : GOT AN EVENT: redhat.com:imagefactory:848fa44e-3eaf-4821-be1e-a351c2ddf8c1, percent_complete100eventPERCENTAGEaddr_object_namebuild_adaptor:build_image:2da7c4ff-e68c-4fdb-8174-40f600bbcb6f_agent_nameredhat.com:imagefactory:848fa44e-3eaf-4821-be1e-a351c2ddf8c1 at 1308851864708022162
D, [2011-06-23T13:57:44.714291 #10459] DEBUG -- : GOT AN EVENT: redhat.com:imagefactory:848fa44e-3eaf-4821-be1e-a351c2ddf8c1, old_statusNEWeventSTATUSnew_statusCOMPLETEDaddr_obje
[root@ibm-x3950m2-01 noarch]# rpm -qa | grep aeolus
aeolus-conductor-0.3.0-0.el6.20110623205403git551632a.noarch
aeolus-configure-2.0.1-0.el6.20110622123902gitdf4ae05.noarch
aeolus-conductor-doc-0.3.0-0.el6.20110623205403git551632a.noarch
aeolus-all-0.3.0-0.el6.20110623205403git551632a.noarch
rubygem-aeolus-cli-0.0.1-1.el6.20110623205403git551632a.noarch
aeolus-conductor-daemons-0.3.0-0.el6.20110623205403git551632a.noarch
aeolus-conductor-devel-0.3.0-0.el6.20110623205403git551632a.noarch
```

release pending...

release pending... closing out old bugs

perm close
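On the virbr0 suspicion raised earlier: one low-effort check is whether the bridge exists and is UP. `virsh net-list --all` reports the libvirt side; the kernel side can be read from `ip -o link`. A minimal sketch with a hypothetical `check_bridge` helper, run here against canned `ip -o link` style output so it executes anywhere:

```shell
# Hypothetical helper: report whether a bridge (e.g. libvirt's default
# virbr0) appears in `ip -o link` output and whether it is UP.
check_bridge() {
  out=$1
  name=$2
  line=$(printf '%s\n' "$out" | grep "^[0-9]*: $name:")
  if [ -z "$line" ]; then
    echo "$name: missing"
  elif printf '%s\n' "$line" | grep -q "UP"; then
    echo "$name: up"
  else
    echo "$name: down"
  fi
}

# Canned sample standing in for `ip -o link` on the affected host:
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500'

check_bridge "$sample" virbr0   # prints "virbr0: up"

# On a live host: check_bridge "$(ip -o link)" virbr0
```

A bridge that is present and UP but has no connectivity (as in the hang described above) would then point at iptables/NAT or dnsmasq on the default network rather than the link itself.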