Bug 1759603

Summary: Stuck on waiting for domain to get an ip address
Product: Fedora
Reporter: Mattia Verga <mattia.verga>
Component: kernel
Assignee: Kernel Maintainer List <kernel-maint>
Status: CLOSED DUPLICATE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Priority: unspecified
Version: 29
CC: airlied, bskeggs, hdegoede, ichavero, itamar, jarodwilson, jeremy, jglisse, john.j5live, jonathan, josef, kernel-maint, linville, lmohanty, madam, masami256, mchehab, mjg59, pvalena, steved, strzibny, thrcka, vondruch
Last Closed: 2019-10-09 17:01:40 UTC
Type: Bug
Attachments:
  Guest machine error messages (flags: none)
  error running timeout 60 vagrant up --debug (flags: none)

Description Mattia Verga 2019-10-08 16:11:11 UTC
Created attachment 1623516 [details]
Guest machine error messages

On a fresh, fully updated F31 machine, when I try to start ('vagrant up') a vagrant box from the fedora-bodhi Vagrantfile, the process gets stuck forever on 'bodhi: Waiting for domain to get an IP address...'.

BUT, if I open the virtual machine from Virtual Machine Manager and move my mouse over the window, the guest machine shows the errors in the attached screenshot and the 'vagrant up' process moves on.

Comment 1 Pavel Valena 2019-10-09 12:34:17 UTC
Hello, thanks for opening the Bug!

> On a fresh, fully updated F31 machine, when I try to start ('vagrant up') a vagrant box from the fedora-bodhi Vagrantfile, the process gets stuck forever on 'bodhi: Waiting for domain to get an IP address...'.
>
> BUT, if I open the virtual machine from Virtual Machine Manager and move my mouse over the window, the guest machine shows the errors in the attached screenshot and the 'vagrant up' process moves on.

So it's probably stuck on 'ssh is not available'. Could you log in to the VM and run `sudo systemctl status sshd`?
The '7 urandom warnings' error suggests there may not be enough entropy ('randomness') to start services that require it, which is unrelated to vagrant. Moving the mouse => generating entropy => unblocking the service would fit that picture.

Possibly related to https://bugzilla.redhat.com/show_bug.cgi?id=1572916.
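
If entropy starvation is indeed the cause, one host-side workaround is to expose a virtio-rng device to the guest so sshd doesn't have to wait for input events. A minimal sketch, assuming the vagrant-libvirt provider and its `random` option (this is not taken from the bodhi Vagrantfile):

```
# Hypothetical Vagrantfile fragment: attach a virtio-rng device so the guest
# can seed its entropy pool from the host instead of waiting for mouse/keyboard.
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # vagrant-libvirt's 'random' option adds an <rng model='virtio'> device;
    # whether it is available depends on the plugin version.
    libvirt.random :model => 'random'
  end
end
```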

In case sshd is not the cause:
  Are you using VirtualBox or Libvirt?
  Do other VMs boot as usual?
  Could you try other boxes / Vagrantfiles?
  Could you please attach the result of `timeout 60 vagrant up --debug`?
  What Vagrantfile are you using, specifically?

Comment 2 Mattia Verga 2019-10-09 16:28:09 UTC
Hi,
I'm using the Vagrantfile provided in https://raw.githubusercontent.com/fedora-infra/bodhi/develop/Vagrantfile under Libvirt.

I think you're right about sshd: even if I don't move the pointer over the VM window, as soon as I start typing the username to log in, the machine gets unstuck. So I cannot check the sshd status while the machine is stuck.

I can't run `timeout 60 vagrant up --debug` because I got the attached error.

I have no other Vagrantfiles, only one other VM, which runs Fedora Rawhide and boots without problems.
But I've tried changing the base image in the Bodhi Vagrantfile from F29 to F30, and the F30 base image starts without manual intervention (after a couple of minutes).
So maybe it's the kernel in the F29 Cloud Base Image that is affected by the kernel bug you pointed out.

Comment 3 Mattia Verga 2019-10-09 16:29:10 UTC
Created attachment 1623860 [details]
error running timeout 60 vagrant up --debug

Comment 4 Pavel Valena 2019-10-09 17:01:40 UTC
Well, I don't think this should be used in a Vagrantfile:

```
opts = GetoptLong.new(

```
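
For context: a Vagrantfile is evaluated in vagrant's own Ruby process, so GetoptLong there parses the same ARGV that vagrant receives, and any flag the Vagrantfile doesn't declare (such as `--debug`) makes GetoptLong raise. A minimal sketch of what I suspect happens; the `--example` option is made up, not taken from the bodhi Vagrantfile:

```
# Hypothetical Vagrantfile fragment illustrating the conflict.
require 'getoptlong'

# Only '--example' is declared, so any other command-line flag,
# e.g. vagrant's own '--debug', raises GetoptLong::InvalidOption.
opts = GetoptLong.new(
  ['--example', GetoptLong::OPTIONAL_ARGUMENT]
)

begin
  opts.each { |opt, arg| puts "#{opt} => #{arg}" }
rescue GetoptLong::InvalidOption => e
  # Without a rescue like this, the whole 'vagrant up --debug' run aborts here.
  warn "Ignoring unknown option: #{e.message}"
end
```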

Anyway, closing, as this is not a vagrant-related issue and it was probably solved in the linked ticket.

*** This bug has been marked as a duplicate of bug 1572916 ***