rubygem-staypuft: Hosts don't get registered in staypuft and don't appear in the "discovered hosts" list.

Environment:
openstack-puppet-modules-2014.1-24.el6ost.noarch
rhel-osp-installer-0.4.5-1.el6ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el6ost.noarch
openstack-foreman-installer-2.0.30-1.el6ost.noarch
ruby193-rubygem-staypuft-0.4.8-1.el6ost.noarch

Steps to reproduce:
1. Install rhel-osp-installer.
2. PXE-boot the machines so they register in staypuft.

Actual result:
Occasionally bare-metal machines and VMs don't get registered. The console of the booting machines shows the following message:

Could not send facts to foreman

Expected result:
All the hosts get registered and appear under the discovered machines.
In the VM setup I was able to register the VMs after switching from the rtl8139 driver to virtio. That said, the vNIC used for PXE was already configured with virtio; the rtl8139 driver was configured only on the other vNICs.
While failing to register in staypuft, the booted VMs can reach the staypuft machine by IP and by hostname with no issues, and the staypuft machine can reach them as well.
I hit the same issue on a bare-metal setup with Dell PowerEdge R320 machines. The message on the console, after "registering host with Foreman", is:

Could not send facts to Foreman: getaddrinfo: Name or service not known
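For what it's worth, the getaddrinfo error suggests the discovered host cannot resolve the Foreman/staypuft hostname at the moment it tries to upload facts. A minimal check one could run from the booted host's console is sketched below; FOREMAN_HOST is a placeholder, not a value taken from this report:

```shell
# Sketch: test whether the Foreman/staypuft hostname resolves on this host.
# "getaddrinfo: Name or service not known" is exactly what a failed lookup
# produces, so this separates a DNS/hosts problem from a network problem.
FOREMAN_HOST="${FOREMAN_HOST:-staypuft.example.com}"  # placeholder hostname

if getent hosts "$FOREMAN_HOST" >/dev/null; then
    echo "resolves"
else
    echo "does not resolve"
fi
```

If the name does not resolve but the IP is reachable (as reported above for the VM case), the DHCP-provided DNS/search domain on the provisioning network would be the next thing to inspect.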
https://github.com/theforeman/foreman-installer-staypuft/pull/105
Alex, can you describe the full VM setup needed to reproduce this bug? I tried almost all cases and it always boots fine for me. Please describe your setup in full detail: NICs, booting NIC, drivers, networks (DHCP, network, netmask, NAT or isolated). Thanks.
While my VMs had 3 vNICs, I didn't experience the issue:

Nic1 = virtio (PXE/management) - internal DHCP, isolated.
Nic2 = rtl8139 (tenant) - there is a DHCP server in the virtual network, but in staypuft I configure the network to use the internal DB. Isolated.
Nic3 = rtl8139 (external) - uses DHCP, bridged outside.
* No bondings.

After I added one more vNIC (it doesn't matter whether it's virtio or rtl8139), I started hitting the issue. Once the issue appeared, I tried changing all drivers to virtio and then all to rtl8139 - no luck. If I remove the fourth NIC and revert the changes, the issue goes away.
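For reference, the NIC model (virtio vs. rtl8139) in a libvirt guest is set per-interface in the domain XML. A sketch of what the PXE vNIC from the setup above might look like after switching it to virtio; the network name is a placeholder, not taken from this report:

```xml
<!-- Hypothetical libvirt domain XML fragment (edit with: virsh edit <guest>).
     One vNIC attached to the isolated PXE network, using the virtio model. -->
<interface type='network'>
  <source network='pxe-isolated'/>  <!-- placeholder network name -->
  <model type='virtio'/>            <!-- was: <model type='rtl8139'/> -->
</interface>
```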
Doc text not required; this was an internally found and fixed BZ.
Verified with:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el6ost.noarch
rhel-osp-installer-0.4.5-2.el6ost.noarch
openstack-puppet-modules-2014.1-24.1.el6ost.noarch
openstack-foreman-installer-2.0.31-1.el6ost.noarch
ruby193-rubygem-staypuft-0.4.10-1.el6ost.noarch

The reported issue no longer reproduces on the same bare-metal setup where it reproduced earlier.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2014-1800.html