Description of problem:
Name resolution fails in several services in the installation environment when attempting to install Rawhide (March 18 and 19, 2009) in a KVM guest on F10 using virt-manager. Virt-manager was configured to use virtual networking (NAT, not bridging); storage was a logical volume (configured as a disk partition).

The problem manifests itself in two ways:
- ssh/scp: host resolution doesn't work, i.e. "ssh hostname" gives "hostname": No such file or directory.
- Anaconda fails with a missing repomd.xml file just after storage configuration.

Retrying with the IP address substituted for the hostname works around the problem in both cases.

Version-Release number of selected component (if applicable):
F10 Host OS:
[root@test185 tmp]# rpm -qa | grep virt
virt-top-1.0.1-4.fc9.x86_64
libvirt-0.5.1-2.fc9.x86_64
python-virtinst-0.400.0-1.fc9.noarch
virt-manager-0.6.0-1.fc9.x86_64
libvirt-python-0.5.1-2.fc9.x86_64
[root@test185 tmp]# uname -r
2.6.27.19-78.2.30.fc9.x86_64
[root@test185 tmp]# rpm -q kvm
kvm-65-15.fc9.x86_64

F11 guest using rawhide-20090318 or rawhide-20090319.

How reproducible:
Always

Steps to Reproduce:
1. Set up an install of Rawhide on F10 with virt-manager:
   - use virtual networking
   - install to partition/LVM-based backing storage (probably not relevant)
   - use an http-based repo/install method.

Actual results:
The install initially works fine: the repo is found, the second stage is downloaded, etc. The problem occurs after storage configuration, when anaconda tries to set up the repo for installation.

Expected results:
Installation proceeds normally.

Additional info:
F10 GOLD installs without issue in this same configuration.
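For anyone hitting the same symptom, the diagnosis and workaround can be sketched from a shell in the install environment. This is an illustrative sketch, not from the original report; it assumes getent is available in the image, and localhost stands in for the real repo hostname:

```shell
# Illustrative sketch (not from the original report).
# Confirm the resolver configuration the installer wrote:
cat /etc/resolv.conf

# getent goes through the same glibc getaddrinfo() path that ssh and
# anaconda's fetcher use, so it reproduces their failure mode;
# "localhost" stands in for the repo hostname here.
getent ahostsv4 localhost

# Workaround from the report: once the IP is known (e.g. looked up on
# another machine), substitute it for the hostname in the repo URL.
```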
One other thing to add to the problem description: the ping utility in the install environment (busybox?) resolves hostnames without issue. ssh, however, fails with the above-mentioned error message.
Created attachment 335913 [details]
anaconda install logs from 3-19 rawhide (successful install with workaround)
Created attachment 335916 [details]
xml definition of the installed F11 guest
Do you have a /etc/resolv.conf file?
I didn't save the resolv.conf file from the installation attempt, but it looked exactly like the one on the installed system, with just two lines: the first is a comment saying it was generated by NetworkManager, and the second is a single nameserver line pointing to 192.168.122.1, which corresponds to the DHCP server provided by the host OS (for the KVM guests). The resolv.conf file from the F10 guest install I attempted looks to be the same.
ok, I have the same problem but it is on bare metal. I get:

WARNING: Try 1/10 for http://download.fedora.redhat.com/pub/fedora/linux/development/x86_64/os/repodata/repomd.xml failed
(repeats 10 times)
WARNING: Failed to get http://download.fedora.redhat.com/pub/fedora/linux/development/x86_64/os/repodata/repomd.xml from mirror 1/1, or downloaded file is corrupt

Switching over to vt-2:

$ ssh download.fedora.redhat.com
ssh: Could not resolve hostname download.fedora.redhat.com: No such file or directory
$ ping download.fedora.redhat.com
*** works just fine ***
$ wget http://download.fedora.redhat.com/pub/fedora/linux/development/x86_64/os/repodata/repomd.xml
*** works just fine ***

/etc/resolv.conf is not interesting; it contains only:
# Generated by NetworkManager
nameserver 192.168.1.1
Interesting tidbit:

$ ssh people.redhat.com
ssh: Could not resolve hostname people.redhat.com: No such file or directory
$ ssh -6 people.redhat.com
ssh: Could not resolve hostname people.redhat.com: No such file or directory
$ ssh -4 people.redhat.com
ssh: connect to host people.redhat.com port 22: No route to host

The IPv6 module is NOT loaded in the kernel and none of my interfaces have IPv6 addresses. (Just loading the ipv6 kernel module does not change anything, though.)
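The -4/-6 behaviour above points at the address-family handling in the getaddrinfo() lookup that ssh performs. The three invocations can be approximated with getent, which exercises the same glibc resolver path; this is a hedged sketch for comparison only, with localhost standing in for people.redhat.com:

```shell
# Rough analogues of the three ssh invocations above (illustrative only):
getent ahosts localhost     # AF_UNSPEC lookup, like plain `ssh host`
getent ahostsv6 localhost   # IPv6-only, like `ssh -6 host` (may fail without IPv6)
getent ahostsv4 localhost   # IPv4-only, like `ssh -4 host`
```

If the plain ahosts lookup fails while ahostsv4 succeeds, the problem is in the resolver library rather than in the network.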
eparis: do you still see this in bare metal scenarios? mgahagan: Same for virt guests? markmc: Any known issues around using F10 NAT to install F11 in the virt space?
This is working for me now, install is currently in progress and is well past the part where it failed for me before.
I haven't seen the problem since the day I reported it.
Closing per comments #9 and #10. Thanks for the updates everyone.